> Microservices only pay off when you have real scaling bottlenecks, large teams, or independently evolving domains. Before that? You’re paying the price without getting the benefit: duplicated infra, fragile local setups, and slow iteration. For example, Segment eventually reversed their microservice split for this exact reason — too much cost, not enough value.
Basically this. Microservices are a design pattern for organisations as opposed to technology. Sounds wrong, but the technology change should follow the organisational breakout into multiple teams delivering separate products or features. And this isn't a first step. You'll have a monolith, it might break out into frontend, backend and a separate service for async background jobs, e.g. PDF creation is often a background task because of how long it takes to produce. Anyway, after that you might end up with more services, and then you have this sprawl of things where you start to think about standardisation, architecture patterns, etc. Before that it's a death sentence, and if your business survives I'd argue it didn't because of microservices but in spite of them. The dev time lost in the beginning, say sub-200 engineers, is significant.
Some resume driven developers will choose microservices for startups as a way to LARP a future megacorp job. Startup may fail, but they at least got some distributed system experience. It takes extremely savvy technical leadership to prevent this.
I saw one startup with about fifty engineers, and dozens of services. They had all of the problems that the post describes. Getting anything done was nearly impossible until you were in the system for at least six months and knew how to work around all the issues.
Here’s the kicker: They only had a few hundred MAUs. Not hundreds of thousands. Hundreds of users. So all this complexity was for nothing. They burned through $50M in VC money then went under. It’s a shame because their core product was very innovative and well architected, but it didn’t matter.
> You'll have a monolith, it might break out into frontend, backend and a separate service for async background jobs
And when you break these out, you don't actually have to split your code at all. You can deploy your normal monolith with a flag telling it what role to play. The background worker can still run a webserver since it's useful for healthchecks and metrics and the loadbalancer will decide what "roles" get real traffic.
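For illustration, a minimal sketch of the "one artifact, many roles" idea (the APP_ROLE variable and handlers are made up, not from the comment above):

```python
# one build artifact; the role it plays is chosen by an env var at start-up
import os, time, threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # /healthz is served by every role, so load balancers and probes treat all
        # instances uniformly; only "web" instances sit in the real traffic pool
        body = b"ok" if self.path == "/healthz" else b"hello from the monolith"
        self.send_response(200)
        self.end_headers()
        self.wfile.write(body)

def run_worker():
    while True:          # stand-in for draining a background-job queue (PDFs, emails, ...)
        time.sleep(5)

if __name__ == "__main__":
    role = os.environ.get("APP_ROLE", "web")
    if role == "worker":
        threading.Thread(target=run_worker, daemon=True).start()
    # every role still runs a webserver for healthchecks and metrics
    HTTPServer(("", 8080), Handler).serve_forever()
```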
I put my team through this as an inexperienced lead about 15 years ago. We were a team of less than a dozen who had a nice single solution file that you could build and run the entire stack from. At the end we were looking at roughly a dozen services, all of which required orchestration to get them running and working together. First-hand lessons in YAGNI and "do the simplest thing that works" which have stuck with me the rest of my career.
I thought the linked article about how Khan Academy eventually migrated to multiple services (https://blog.khanacademy.org/go-services-one-goliath-project...) was a good example of when introducing microservices is a good idea:
They had already scaled the mono service about as far as it could go and had a good sense of what the service boundaries should be based on experience.
There are plenty of tech reasons for microservices. e.g. scaling high traffic services separately and separating low priority functionality from critical paths. I would agree that this is usually not a smart thing to do in a small org, but I have seen times where splitting out a high load path into a microservice has been very much worth it at a startup.
> Microservices only pay off when you have (...) independently evolving domains.
I don't see any major epiphany in this. In fact, it reads like a tautology. The very definition of microservice is that it's an independently evolving domain. That's a basic requirement.
> Microservices are a design pattern for organisations as opposed to technology
Very true in my experience. The main benefit is letting small groups of people work independently without stepping on each other’s toes. Although I’ve worked on a project where multiple teams owned microservices that were supposed to be standardized with each other, and it just led to endless meetings and requirements churn, since nobody was willing to work on the other teams' services but everyone had an opinion on what the cross-team standard should be. Learning the diplomatic way to say “mind your own business” was more important than any technical skill for getting code merged.
I've tended to use microservices in limited cases where the system had to serve a few requests that had radically different performance requirements, particularly memory utilization. I had a PHP server for instance that served exactly one URL for which PHP was not a good fit and a specialized server in another language for that one URL gave like 1000x better performance and money savings in terms of not needing a much bigger PHP server.
In the Java world, using Spring or Guava, it is common for people to write "services" that are simply objects implementing some interface, which are injected by the framework. In a case like that you can imagine a service could have either an in-process implementation or an out-of-process implementation (e.g. via a web endpoint or some RPC). Frameworks like that are normally thinking at the level of "let's initialize one application in one address space at a time", but it would be nice to see something oriented towards managing applications that live in various address spaces.
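A rough, language-agnostic sketch of that idea in Python (class names and the HTTP endpoint are invented, and the wiring is done by hand here where a framework would inject it): the calling code depends only on the interface, and the wiring decides whether the implementation lives in-process or behind a web endpoint.

```python
from abc import ABC, abstractmethod
import urllib.request, json

class PriceService(ABC):
    @abstractmethod
    def price_for(self, sku: str) -> float: ...

class InProcessPriceService(PriceService):
    """Implementation that runs in the same address space."""
    def __init__(self, table: dict[str, float]):
        self._table = table
    def price_for(self, sku: str) -> float:
        return self._table[sku]

class RemotePriceService(PriceService):
    """Same contract, fulfilled by another process over HTTP (endpoint is hypothetical)."""
    def __init__(self, base_url: str):
        self._base_url = base_url
    def price_for(self, sku: str) -> float:
        with urllib.request.urlopen(f"{self._base_url}/prices/{sku}") as resp:
            return float(json.load(resp)["price"])

def checkout_total(prices: PriceService, skus: list[str]) -> float:
    # application code never knows (or cares) which implementation was injected
    return sum(prices.price_for(s) for s in skus)

# wiring decides the topology; swapping it doesn't touch the calling code
svc = InProcessPriceService({"sku-1": 9.99, "sku-2": 4.50})
print(checkout_total(svc, ["sku-1", "sku-2"]))
```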
Trouble is that some people get this squee when they hear they can use JDK 9 for this project and JDK 10 for another project and JDK 11 for another project and they'd rather die than eschew the badly broken Python 3.5 for something better. If you standardized absolutely everything I think you could be highly productive with microservices because you wouldn't have to face gear switching or deal with teams who just don't know that XML serialization worked completely differently in JDK 7 vs JDK 8 thus the services they make don't quite communicate properly, etc.
Sounds right, no? Service is what people provide, implied in the scope of a macro economy. Microservice then implies the same type of service, but within the micro economy of a single business.
> Microservices are a design pattern for organisations as opposed
> to technology ... breakout into multiple teams
I agree, but just saying "multiple teams" has led many eng directors to think "I have two squads now --> omg they cannot both be in the same monolith".
When both squads are 5 people each.
And the squads re-org (or "right size") every 9 months to re-prioritize on the latest features.
Five years go by, 7 team/re-org changes, all of which made sense, but thank god we didn't microservice on the 2nd/3rd/4th/5th/6th team boundaries. :grimacing:
We should say "stable, long-lived teams" -- like you need to have a team that exists with the same ownership and mandate for ~18 months to prove it's a stable entity worth forming your architecture around.
I know about an org with ~2-3 devs who decided microservices would be cool. I warned them not to go that way because they would surely face delivery and other issues which they wouldn't have when building the solution on an architecture archetype that was a better fit for the team and the solution, which I decided should be a modular monolith. (The codebase at that point was already a monolith, in fact, but had a large amount of tech debt due to the breakneck speed at which features needed to be released.)
They ignored me and went the microservices way.
Guess what?
2 years later the rebuild of the old codebase was done.
3 years later and they are still fighting delivery and other issues they would never have had if they didn't ignore me and just went for the "lame" monolith.
Moral of this short story: I can personally say everything this article says is pretty much true.
> 3 years later and they are still fighting delivery and other issues
Having added a fancy new technology and a "successful" project to their resume, they're supposed to move on to the next job before the consequences of their actions are fully obvious.
I suspect a lot of the issues teams encounter with microservices stem from a lack of cohesive understanding of microservices.
If people on the team continue to think about the "system" as a monolith (what they already know and are comfortable with), you'll hit friction every step of the way, from design all the way out to deployment. Microservices throw out a lot of traditional assumptions and designs, which can be hard for people to subscribe to.
I think there has to be adequate "buy-in" throughout the org for it to be successful. Turning an existing mono into microservices is very likely to meet lots of internal resistance as people have varying levels of being "with it", so-to-speak.
> 2 years later the rebuild of the old codebase was done.
>
> 3 years later and they are still fighting delivery and other issues they would never have had if they didn't ignore me and just went for the "lame" monolith.
I had a similar experience setting up the Infra for an 8-12 microservice application. The project had been dragging and no one really understood what they were doing. When I started asking scale questions, the answer came back that this was an internal admin UI for 9-5 business that would have 5-10 users.
One place I worked at got sold on microservices by Thoughtworks, along with a change to Java as the main language to be used.
As one would expect, they made bank from their consulting endeavor and rode off into the sunset while the rest of us wasted several years of our careers rewriting ugly but functional monolithic code into distributed Java based microservices. We could have been working on features and product but essentially were justifying a grift, adding new and novel bugs as we rebuilt stable APIs from scratch.
The company went under not long after the project was abandoned. Nobody, of course, would be held to account for it. I will no longer touch a tech consultancy like TW with a 10 foot barge pole.
grug mention grug brain. grug also have grug brain. grug like grug. grugs together strong unless too many grugs then Overgrug think 9 grugs make baby grug in one month and grug not think it work like that
Micro services show their benefits in a large organization.
It’s a tool to solve people issues. They can remove bureaucratic hurdles and allow devs to somewhat be autonomous again.
In a small startup, you really don’t gain much from them. Unless the domain really necessitates them, e.g. the company uses Elixir but all of the AI tooling is written in Python/Go.
If your application has different load or resource requirements, you should build separate services, even in a startup.
You can put most of your crud and domain logic in a monolith, but if you have a GPU workload or something that has very different requirements - that should be its own thing. That pattern shouldn't result in 100 services to maintain, but probably only a few boundaries.
Bias for monolith for everything, but know when you need to carve something out as its own.
microservices can also cause organizational dependencies and coordination that wouldn't otherwise be necessary. i've seen it create at least as many people issues as solve them. one seemingly innocuous example is the policy of 'everybody just uses whatever services they want', which can hugely increase the ongoing maintenance requirements and seems to require that everyone learn everything in order to be functional. which never happens, which means you're always chasing people down.
Microservices are the software architecture analog to Conway's Law. You can't help but introduce some sort of significant architecture boundary at the boundary between teams, and while that doesn't have to be "microservices" that's certainly a very attractive option. But on the flip side, introducing those heavier-weight boundaries on to yourself, internal to a team, can be very counterproductive.
I can't prove this scales up forever but I've been very happy with making sure that things are carefully abstracted out with dependency injection for anything that makes sense for it to be dependency-injected, and using module boundaries internally to a system as something very analogous to microservices, except that it doesn't go over a network. This goes especially well with using actors, even in a non-actor-focused language, because actors almost automatically have that clean boundary between them and the rest of the world, traversed by a clean concept of messages. This is sometimes called the Modular Monolith.
Done properly, should you later realize something needs to be a microservice, you get clean borders to cut along and clean places to deal with the consequences of turning it into a network service. It isn't perfect but it's a rather nice cost/benefit tradeoff. I've cut out, oh, 3 or 4 microservices out of monoliths in the past 5 years or so. It's not something I do every day, and I'm not optimizing my modular monoliths for that purpose... I do modular monoliths because it is also just a good design methodology... but it is a nice bonus to harvest sometimes. It's one of the rare times when someone comes and quite reasonably expects that extracting something into a shared service will be months and you can be like "would you like a functioning prototype of it next week"?
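For a feel of what that in-process boundary can look like, here is a toy sketch (the "actor" is just a thread with a queue, and the names are invented): the rest of the app only ever sends messages, so swapping the queue for a network transport later doesn't change any caller.

```python
import queue, threading

class ThumbnailActor:
    """A module behind an actor-style boundary: messages in, messages out."""
    def __init__(self):
        self._inbox: queue.Queue = queue.Queue()
        threading.Thread(target=self._run, daemon=True).start()

    # the entire public surface: one message-shaped entry point
    def submit(self, image_id: str, reply_to: queue.Queue) -> None:
        self._inbox.put((image_id, reply_to))

    def _run(self) -> None:
        while True:
            image_id, reply_to = self._inbox.get()
            # ... real work would happen here ...
            reply_to.put(f"thumb-of-{image_id}")

# caller side: no shared state with the actor, just messages
actor = ThumbnailActor()
replies: queue.Queue = queue.Queue()
actor.submit("img-42", replies)
print(replies.get(timeout=1))
```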
Conway's law is about communication, not team boundaries. There is no requirement that we introduce a significant architectural boundary at the boundary between teams: companies choose to do so to avoid having cross-team communication.
The only way for significant architectural boundaries at team boundaries to not result in incredibly painful software, especially for a growing team, is to let the software organize the teams. Which means reorging the company whenever you need to refactor, and somehow guessing right about how many changes each component will need in the coming year.
It also means you can't have product and engineers explore a problem together, or manage by objective with OKRs since engineers aren't connected to business outcomes.
I know that all the ex-Amazonians are convinced this is the only way to build software, but it really, really isn't.
Microservices make sense from a technical perspective in startups if:
- You need to use a different language than your core application. E.g. we build Rails apps but need to use R for a data pipeline and 100% could not build this in ruby.
- You have 1 service that has vastly different scaling requirements than the rest of your stack. Then splitting that part off into its own service can help
- You have a portion of your data set that has vastly different security and lifecycle requirements. E.g. you're getting healthcare data from medicare.
Outside of those, and maybe a few other edge cases, I see basically no reason why a small startup should ever choose microservices... you're just setting yourself up for more work for little to no gain.
Splitting off a few services from an application is not the same as using microservices. With microservices you split off basically everything that would be a module in a normal application.
In addition to having 1 service with vastly different scaling requirements, having 1 service with vastly different availability requirements may make sense to separate as well.
If you need to keep the lights on or maintain an SLA and can do so by separating a concern, it can really reduce risk and increase speed when deploying new features on "less important" components.
One of the advantages of the BEAM / OTP ecosystem (Erlang, Elixir, and friends) is that you can construct "microservices" and think through what that means, all within a monolith. When it comes time to break it out, you can.
> Microservices only pay off when you have real scaling bottlenecks, large teams, or independently evolving domains.
The BEAM language platform can cover scaling bottlenecks (at least within certain ranges of scale) and independently evolving domains, but has many of the advantages of working with a monolith when the team is small and searching for product-fit.
Like anything there are tradeoffs. The main one being that you'd have to learn how to write code with immutable data structures, and you have to be more thoughtful about how concurrent processes talk to each other and what kind of failure modes you want to design into things. Many teams don't know how to hire more Erlang or Elixir developers.
The biggest wins for microservices aren't really technical, they're organizational. They force you to break a problem down and allow each team to own a piece of it, including end to end delivery. This allows specialization of labor which is a key driver of productivity - including an ability to experiment and innovate. Every change is incremental by default, and well-documented external APIs are the only way to talk to other domains- no shared databases, filesystems, or internal APIs. It's not free and definitely takes some discipline and tooling to enforce shared standards (every service should have metrics, logging, tracing, discovery, testing, CI/CD, etc) but you'd need to build that muscle with a monolith as well.
You could keep infra as code, logging, auth and so on in packages; gRPC or message queues for communication; telemetry, monitoring/alerts and more as code too... it got to the point where creating a new service was just a new repo, name, port and resource utilization.
Agree with the organizational win; also, smaller merge requests in the team were superb.
At around 5-10 devs on a monolith we ran into conflicts more often: deployment, bigger merge requests, releasing by feature was problematic. Microservices made the team more productive, but rules about tests/docs/endpoints/code were important.
I pretty much agree with everything in this article; it’s next to impossible to get service boundaries right in a startup environment.
Though, if you’re on a small team and really want to use microservices, there are two places I have found them to be somewhat advantageous:
* wrapping particularly bad third party APIs or integrations — you’re already forced into having a network boundary, so adding a service at the boundary doesn’t increase complexity all that much. Basically this lets you isolate the big chunk of crappy code involved in integrating with the 3rd party, and giving it a nice API your monolith can interact with.
* wrapping particularly hairy dependencies — if you’ve got a dependency with a complex build process that slows down deployments or dev setup — or the dependency relies on something that conflicts with another dependency — wrapping it in its own service and giving it a nice API can be a good way to simplify things for the monolith.
You only need microservices for massive scale or to enable micromanagement of teams, but that doesn't mean you have to give up on clear module boundaries.
You can get the architectural benefits of microservices by using message-passing-style Object-Oriented programming. It requires the discipline not to reach directly into the database, but assuming you just Don't Do That, a well-encapsulated "object" is a microservice that runs in the same virtual machine as the other microservices.
Java is the most mainstream language that supports that: whenever you find yourself reaching for a microservice, instead create a module, namespace the database tables, and then expose only the smallest possible public interface to other modules. You can test them in isolation, monitor the connections between them, and bonus: it is trivial to deploy changes across multiple "services" at the same time.
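A rough sketch of that shape, shown in Python for brevity (in Java you would use packages and visibility modifiers; the module name, table name, and functions here are illustrative): one "billing" module owns its own namespaced tables and exposes only a tiny facade, so the rest of the monolith never reaches into its internals.

```python
# billing/__init__.py -- the ONLY names other modules are allowed to import
__all__ = ["open_invoice", "invoice_total"]

import sqlite3

_conn = sqlite3.connect(":memory:")
# tables are namespaced with the module prefix so ownership stays obvious
_conn.execute(
    "CREATE TABLE IF NOT EXISTS billing_invoices (id INTEGER PRIMARY KEY, total REAL)"
)

def open_invoice(total: float) -> int:
    """Public entry point: the rest of the app calls this, never the tables."""
    cur = _conn.execute("INSERT INTO billing_invoices (total) VALUES (?)", (total,))
    _conn.commit()
    return cur.lastrowid

def invoice_total(invoice_id: int) -> float:
    row = _conn.execute(
        "SELECT total FROM billing_invoices WHERE id = ?", (invoice_id,)
    ).fetchone()
    return row[0]

# everything else (pricing rules, tax tables, ...) stays private to the module,
# which is what makes it cheap to test in isolation or later cut out as a service
```

Callers do `from billing import open_invoice`; someone importing `billing._conn` directly is the code-review smell that, in this style, replaces a network boundary.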
They have their place. In my experience, a good rule of thumb[0] is if there are actual benefits from being a standalone service.
For example, we have an authentication microservice at work. It makes sense that it lives outside of the main application, because it's used in multiple different contexts and the service boundary allows for it to be more responsive to changes, upgrades and security fixes than having it be part of the main app, and it deploys differently than the application. It also adds enough intentional friction that we don't accidentally put logic where it doesn't belong as part of the user authentication process. It has helped keep the code focused on only primary concerns.
That said, you can't apply any of these patterns blindly, as is so often the case. A good technical leader should push back when the benefits don't actually exist. The real issue is lack of experience making technical decisions on merits.
This includes high level executive leaders in the organization. At a startup especially, they are still often involved in many technical decisions. You'd be surprised (well maybe not!) how the highest leadership in a company at a startup will mandate things like using microservices and refuse to listen to anything running counter to such things.
Most “benefits” assumed from separation can be achieved with clear interfaces and modular monoliths, without the cognitive and operational tax microservices impose.
> It also adds enough intentional friction that we don't accidentally put logic where it doesn't belong as part of the user authentication process.
Preventing misplaced logic is a matter of good code structure, well defined software development processes and team discipline - not something that requires splitting into a separate microservice, and definitely not something that you want to solve on system architecture level.
My current take on microservices is that people pay serious attention to modularity and API design in the context of microservices. They work hard to break down the problem properly and design good interfaces between parts of the system.
In monoliths, they generally don't.
There's no logical reason why you couldn't pay as much attention to decomposition and API design between the modules of a monolith. You could have the benefit of good design without all the architectural and operational challenges of microservices. Maybe some people succeed at this. But in practice I've never seen it. I've seen people handle the challenges of microservices successfully, and I've never seen a monolith that wasn't an incoherent mess internally.
This is just my experience, one person's observations offered for what they're worth.
In practice, in the context of microservices, I've seen an entire team work together for two weeks to break down a problem coherently, holding off on starting implementation because they knew the design wasn't good enough and it was worth the time to get it right. I've seen people escalate issues with others' designs because they saw a risk and wanted to address it.
In the context of monoliths, I've never seen someone delay implementation so much as a day because they knew the design was half-baked. I rarely see anyone ask for design feedback or design anything as a team until they've screwed something up so badly that it can't be avoided. People sometimes make major design decisions in a split second while coding. What kind of self-respecting senior developer would spend a week getting input on an internal code API before starting to implement? People sometimes aren't even aware that the code they wrote that morning has implications for code that will be written later.
Theoretically this is okay because refactoring is easy in a monolith. Right? ... It is, right?
I'm basically sold on microservices because I know how to get developers to take design seriously when it's a bunch of services talking to each other via REST or grpc, and I don't know how to get them to take the internal design of a monolith seriously.
Every good monolith I've worked in (and I have worked in several, including one that was more than twenty years old) was highly-modular, well-designed with an easy-to-explain architecture.
The other thing they had in common was that code reviews talked about the aesthetics of the code and design, instead of just hunting for errors or skimming for security problems. It was relatively common to throw out the first proposed PR and start over, and that was fine because people were slicing the work small enough they were posting four to six PRs a week anyway.
It took the engineers at the company being willing to collaborate on the craft of software development and prioritize the long-term health of the code over short-term feature delivery. And the result of being willing to go a little bit slower day-to-day was that the actual feature delivery was faster than anywhere else I've ever worked.
Without a functioning professional culture, nothing is going to be great. But at least with microservices people do have to design an API at some point.
I'll go against the grain and say that microservices have advantages for small dev teams embedded in non-tech orgs.
1. You get to minimize devops/security/admin work. Really a consequence of using serverless tooling, but you land on something like a microservices architecture if you do.
2. You can break out work temporally. This is the big one - when you're a small team supporting multiple products, you often don't have continuity of work. You have one project for a few months, a completely unrelated product for another few months. Microservice architectures are easier to build and maintain in that environment.
Watch out for bit rot, though: it is very easy for a startup to come back to one of those microservices six months later and discover the dependencies are borked and it no longer even builds.
Each repo you create is one more set of Dependabot alerts you need to keep on top of.
Years ago I attended a local meetup where the CTO of a local startup gave a presentation on their, mostly successful, microservice rollout.
In the Q&A afterward, another local startup CTO asked about problems their company was having with their microservices.
The successful CTO asked two questions: "How big is your microservices tooling team?" and "How big is your Dev Ops Team?"
His point was, if your development team is not big enough to afford dedicated teams for tooling and dev ops, it's not big enough to afford microservices.
I was in an org with a dedicated 10-person devops team and it was smooth; as a dev you could even push requests to their repos... but I've also seen just 3 devops people who were so busy that my requests for basic stuff were buried in the backlog. You can develop, but things still need maintaining from time to time.
My friend briefly worked at a company where every API was a lambda. Each lambda had a git repo. Lambdas would often call into other lambdas. In order to make a feature, it might involve touching 10+ lambdas. They had over 200 lambdas after a year. Total nightmare
I think the major issue that I see (and I could be wrong) is that if you want to change some underlying functionality in a dependent function several layers deep, you have to change all the intermediate functions just so you can reach that dependent function.
I have played around with architectures like this, but I allowed the caller to patch in a dependent function in the call, with those function overrides passed from function to function.
Monolith really is the best path and I question if you couldn't make it work in ~100% of cases if you genuinely tried to.
One should consider if they can dive even deeper into the monolithic rabbit hole. For example, do you really need an external hosted SQL provider, or could you embed SQLite?
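A minimal sketch of what "just embed it" looks like (the file path and pragma choice are illustrative; a single writer process is the usual constraint):

```python
import sqlite3

# the database is a file next to the app: no connection pool, no network hop
conn = sqlite3.connect("app.db")
conn.execute("PRAGMA journal_mode=WAL")  # readers don't block the writer
conn.execute(
    "CREATE TABLE IF NOT EXISTS events (id INTEGER PRIMARY KEY, payload TEXT)"
)

conn.execute("INSERT INTO events (payload) VALUES (?)", ("hello",))
conn.commit()

# a read is a local B-tree lookup, microseconds instead of a network round trip
print(conn.execute("SELECT COUNT(*) FROM events").fetchone()[0])
```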
From a latency & physics perspective, monolith wins every time. Making a call across the network might as well take an eternity by comparison to a local method. Arguments can be made that the latency can be "hidden", but this is generally only true for the more trivial kinds of problems. For many practical businesses, you are typically in a strictly serialized domain which means that you are going to be forced to endure every microsecond of delay. Assuming that a transaction was not in conflict doesn't work at the bank. You need to be sure every time before the caller is allowed to proceed.
The tighter the latency domain, the less you need to think about performance. Things can be so fast by default that you can actually focus on building what the customer is paying for. You stop thinking about the sizes of VMs, who's got the cheapest compute per dollar and other distracting crap.
>I question if you couldn't make it work in ~100% of cases if you genuinely tried to.
You could say this about almost any pattern, if you genuinely tried to make microservices work it could work in ~100% of cases, I'm sure of that.
It's this pattern of dismissing or accepting a solution with strong prejudice, without evaluating the merits, that is the real problem. That's the true behavior we need to get away from.
We as an industry may find, that modular monoliths trend toward the top as a result (I hate to speculate too much, every company is different and there are in fact other patterns of development beyond the two mentioned) but that would be a side effect if true. The real win is moving away from such prejudiced behavior
Containerization unfortunately pretty much killed embedded DBs; it's a shame, because you can squeeze a lot of performance out of not having to access the DB over a network.
I hope this is more common knowledge these days... but this is good framing and makes really clear the costs.
What this article doesn't cover... and where a good chunk of my career has been, is when companies are driven to break out into services, which might be due to scale, team size, or becoming a multi-product company. Whatever the reason, it can kill velocity during the transition. In my experience, if this is being done to support becoming multi-product, this loss in velocity comes at the worst time and can sink even very competent teams.
As an industry, the gap between what makes sense for startups and what makes sense for scale can be a huge chasm. To be clear, I don't think it means you should invest in micro-services on the off chance you need to hit scale (which I think is what many convince themselves of), nor does it mean that you should always head to microservices even when you hit those forcing functions (scaling monoliths is possible!)
That said, modularity, flexibility, and easy evolution are super important as companies grow, and I do really think the next generation of tools and platforms will benefit from better suiting themselves to evolution and flexibility than they do today. One idea I have thought about for some time is platforms that "feel" like a monolith, but are 1) more concrete in building firmer interfaces between subsystems and 2) flexible in how calls happen across those interfaces (imagine being able to run a subsystem embedded or transparently move calls over an RPC interface). Certainly that is "possible" with well-structured code in platforms today... but it isn't always natural.
I am not sure of the answer, but I really hope the next 10 years of my career has fewer massive chasms crossed via huge multi-year painful efforts and more cautious, careful evolution enabled by well-considered tools and platforms.
Has anyone tried something like Polylith, which lets you build all your code like normal functions for local dev and testing and then seamlessly pull parts out into network services as needed?
The article almost gets there, but the key is this:
Microservice architecture is a deployment strategy.
If you have a problem with deployments (e.g. large numbers of teams, perhaps some external suppliers running at different cadences, or with different tech stacks) then microservices are a fine solution to this.
I used to love monoliths, but I just can’t do it anymore. After many years of development, my brain simply resists building another one. My solo side project now consists of 12 AWS accounts, separate code repositories, separate pipelines, and separate infrastructure-as-code repositories. Some might say that’s insane—and fair point—but to me, it’s insane to pack everything together. If I know that my public API is sitting right next to customer data, that’s a red flag. No network isolation? Red flag. DNS management in the same account? Another red flag. To me, separation of concerns makes development much leaner. Things just work.
Pretty sure I saw someone say this in the past, but microservices might as well have been a psyop pushed out by larger, successful startups onto smaller, earlier-stage companies and projects. I say "might as well" because I don't think there's any evidence for it, but the number of companies and projects that have glommed onto the microservices idea, only to find their development velocity grind to a halt, has to be in the hundreds at least (thousands?). Whether the consequences were intended or not, microservices have been a gift on the competitive landscape for the startups that pushed microservices in the first place.
The best architecture approach I've ever found is:
1. Start with a monolith
2. If necessary, set up a job server that can be vertically/horizontally scaled and then give it a private API, or, give it access to the same database as the monolith.
For an overwhelming number of situations, this works great. You separate the heavy compute workloads from the customer-facing CRUD app and can scale the two independent of one another.
The whole microservices thing always seemed like an attempt by cloud providers to just trick you into using their services. The first time I ever played with serverless/lambda, I had a visceral reaction to the deployment process and knew it would end in tragedy.
I’ve read a lot of pros/cons about micro services over the last decade, but don’t have a clear definition for what qualifies.
My current job insists that they have a “simple monolith” because all the code is in a single repo. But that repo has code to build dozens of python packages and docker containers. Tons of deploy scripts. Different teams/employees are isolated to particular parts of the codebase.
It feels a lot like microservices, but I don’t know what the defining feature of microservices is supposed to be
Sounds like microservices deployed from a monorepo...
Which honestly may be the future if LLMs stay in a dev's toolkit. Plugging in an AI model to a monorepo provides so much context that can't be easily communicated across microservices in separate repos.
This article conflates the Monolith|Microservices and Monorepo|Polyrepo dichotomies. Although it is typical to choose Microservices and Polyrepo together or Monolith with Monorepo, it's not strictly necessary and the two architectural decisions come with different tradeoffs.
For example you may be forced to split out some components into separate services because they require a different technology stack to the monolith, but that doesn't strictly require a separate source code repository.
In 2016-17, I was involved with a rather large microservice-heavy rewrite project that didn't go particularly well. The main reason was that microservices were actually a good fit for the _planned_ organisational structure, but not for the one that was eventually put in place. When you go from 4 vertically integrated independent teams to 2 backend devs, 2 frontend devs, and 1 "devops" without stopping 5 minutes to rethink the architecture, of course shit will happen.
I've worked in monoliths done poorly and well, as well as bad and good implementations of microservices (even if done for the wrong reasons). The part of this post on 'if you go microservices' doesn't state things strongly enough. My takeaways comparing what worked vs what didn't:
- Use one-way async messaging. Making a UserService that everything else uses synchronously via RPC/REST/whatever is a very bad idea and an even worse time. You'll struggle for even 2-nines of overall system uptime (because they don't average, they multiply down).
- 'Bounded context' is the most important aspect of microservices to get right. Don't make <noun>-services. You can make a UserManagementService that has canonical information about users. That information is propagated to other services which can work independently each using the eventually consistent information they need about users.
There's other dumb things that people do like sharing a database instance for multiple 'micro'-services and not even having separately accessible schemas. In the end if done well, each microservice is small and pleasant to work on, with coordination between them being the challenging part both technically and humanly.
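As a toy illustration of that propagation pattern (the event shape and the in-memory "bus" are invented; in practice this would be Kafka, SNS, RabbitMQ, or similar): the owning service publishes facts, and each downstream service keeps only the slice of user data it needs, so a request never blocks on a synchronous call to a user service.

```python
import json, queue

bus: "queue.Queue[str]" = queue.Queue()  # stand-in for a real message broker

def user_service_rename(user_id: str, new_name: str) -> None:
    # the owning service emits a fact; nobody calls it synchronously on the hot path
    bus.put(json.dumps({"type": "user.renamed", "user_id": user_id, "name": new_name}))

class OrderService:
    """Keeps an eventually consistent local copy of the user fields it needs."""
    def __init__(self):
        self._user_names: dict[str, str] = {}

    def handle(self, raw_event: str) -> None:
        event = json.loads(raw_event)
        if event["type"] == "user.renamed":
            self._user_names[event["user_id"]] = event["name"]

    def render_order(self, user_id: str) -> str:
        # served entirely from local state: a user-service outage
        # doesn't take order rendering down with it
        return f"Order for {self._user_names.get(user_id, 'unknown user')}"

orders = OrderService()
user_service_rename("u1", "Ada")
orders.handle(bus.get())          # in reality, a consumer loop / subscription
print(orders.render_order("u1"))  # -> Order for Ada
```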
Problem is that, come recruiting time, interview gatekeepers are filtering out candidates who don't have the shiny words of the season: microservices, unit tests, lots of abstractions, etc. It's like a dating app game. Everyone knows it's overblown but they are still playing the game. The idea that not every company needs to make the same architectural and technological decisions is a concept way too complex for interview gatekeepers.
Are unit tests a shiny fad? Second time I've seen it mentioned in this thread. Is there some other type of testing I should be doing, or have I been doing it all wrong for the last two decades?
Every service boundary you have to cross is a point of friction and a potential source of bugs and issues, so by having more microservices you just have more that can go wrong, by definition.
A service needs to maintain an interface for compatibility reasons. Each microservice needs to do that and do integration testing with every service they interact with. If you can't deploy a microservice without also updating all its dependencies then you don't have an independent service at all. You just have a more complicated deployment with more bugs.
The real problem you're trying to solve is deployment. If a given service takes 10 minutes to restart, then you have a problem. Ideally that should be seconds. But more ideally, you should be able to drain traffic from it then replace it however long it takes and then slowly roll it out checking for canary changes. Even more ideally, this should be largely automated.
Another factor: build times. If a service takes an hour to compile, that's going to be a huge impediment to development speed. What you need is a build system that caches hermetic artifacts so this rarely happens.
With all that above, you end up with what Google has: distributed builds, automated deployment and large, monolithic services.
Doing microservices badly is a tax. But you have to ask if you’ve checked the boxes before doing them.
Do you have standardization and reuse of things like linting, formatting, ci/cd pipelines, version stability, deployment patterns, monitoring integrations, integration and end to end testing, etc.? If you’re doing those things bespoke per repo/deployment, or if you don’t have roles dedicated to the support and maintenance, you’re not going to have a good time with microservices.
Do you have actual issues of scale where API hot paths are dominating your runtime? Are they horizontally scalable or bottlenecked on downstream dependencies (databases)? You can’t solve scale issues by just spinning microservices willy nilly (e.g. by domain topic).
Is your development environment sophisticated enough to actually run a stack? Or do you have supporting clusters that allow for local binding of services? If not, you’re going to struggle with microservice local development, and pay for a slow QA in staging.
Does all that require supporting roles and expertise? Yeap. If you’re a 5 person startup you probably don’t have that. If you’re a 150 person startup, you might.
I’ve seen Java monoliths with 11M lines of code that represented 80% of the production cost to run and the gradual break out of targeted APIs to microservices halved that while the monolith still lived on. I’ve seen queued microservice architectures ripping through tens of millions of events/requests a minute with less than a thousand pods (across the services) and a fraction of the resources of the monolith.
Ultimately there’s no free lunch in software and you shouldn’t pursue any design without understanding the tradeoffs.
Trying to force fit a monolith/modulith is also a problem. I've seen overreactions to fears about having separate services create suboptimal solutions. The real answer is that you have to make the right solution for the problem you're facing without lapsing into a dogma. Humans tend to be pretty bad at this and want simple rules of thumb. The only rule I would endorse almost blindly is, don't start with microservices. The rest depends on what happens with your business and in what time frame. After building a company from scratch to 15 devs and experiencing tons of services at Spotify, I would recommend:
- monolith to start with very little time spent on code architecture patterns like DDD (although these days with llms I would say go for it and use DDD patterns in your prompts)
- optimize code cleanliness by adhering to better code architecture patterns
- when it feels like you are doing weird things to scale a process on the monolith (scheduling background tasks where you could break out a pubsub-to-function service and defend your uptime while coordinating on a shared DB), drop any religion around no microservices.
Google had a really great paper on this about 2 years back titled Towards Modern Development of Cloud Applications[0] that talks about how teams often:
> ... conflate logical boundaries (how code is written) with physical boundaries (how code is deployed)
It's very easy to read and digest and I think it's a great paper that makes the case for building "modular monoliths".
I think many teams do not have a practical guide on how to achieve this. Certainly, Google's solution in this case is far too complex for most teams. But many teams can achieve the 5 core benefits that they mentioned with a simpler setup. I wrote about this in a blog post, A Practical Guide to Modular Monoliths with .NET[1], with a GitHub repo showing how to achieve this[2] as well as a video walkthrough[3].
This approach has proven (for me) to be easy to implement, package, deploy, and manage and is particularly good for startups with all of the qualities mentioned in the Google paper without much complexity added.
I don't know what the "right" answer is, but I worked at a company that built a fairly unwieldy monolith that was dragging everyone down as it matured into a mid-sized company. And, once you're successfully used at scale it becomes much more difficult to make architectural changes. Is there a middle ground? Is there a way to build a monolith while making it easier to factor apart services earlier rather than later? I don't know, and I don't think the article addresses that either.
The article does mention "invest in modularity", but to be honest, if you're in frantic startup mode dumping code into a monolith, you're probably not caring about modularity either.
Lastly, I would imagine it's easier to start with microservices, or multiple mid-sized services if you're relying on advanced cloud infra like AWS, but that has its own costs and downsides.
I can't imagine a small team following ALL the rules of microservices benefiting much at all. It makes no sense.
For large orgs where each service has a dedicated team it starts to make sense... but then it becomes clear that microservices are an organizational solution.
If a dev at a startup insisted on using windows for their laptop and it didn’t work with the infra, then wouldn’t they just be made to use Mac / Linux? All that effort just to support an operating system choice seems like a huge tax.
From my experience, microservices were great when there were more devs; the advantage is organizational rather than technical.
CI/CD - infra can be as code and shared across services; K8s port-forward for local development; better resource utilization; multiple envs and so on; and the available tooling, if set up correctly, usually keeps working.
Another plus not often mentioned: usually smaller merge requests. A feature can be split and better estimated, with fewer conflicts during work or testing... plus the possibility to share code in packages.
Also, if there are no tests it doesn't matter whether it's a monorepo or microservices: you can break things easily or spend more time.
You should afford tests and documentation, and keep working on tech debt.
Another common issue I see: too big a tech stack because something is popular.
I've seen microservices get introduced at companies... it never solved a real problem, it was more to scratch a developer's itch, or cargo cult ideas. It started to fall apart when they tried to figure out how to get an order service to fetch the prices of a product from the product pricing service, only to realise they need to hold onto the product price at the time of placing the order (it was a high volume / short product life cycle type of e-commerce), so uhh.. maybe we should duplicate this product into the order service? And then it would need to end up at a payment or invoicing service, more data duplication. And everything had to go through a central message bus to avoid web-like sprawl.
The other one was a microservice architecture in front of the real problem, a Java backend service that hid the real real problem, one or more mainframes. But the consultants got to play in their microservices garden, which was mostly just a REST API in front of a Postgres database that would store blobs of JSON. And of course these microservices would end up needing to talk to each other through REST/JSON.
I've filed this article in my "microservices beef" bookmarks folder if I ever end up in another company that tries to do microservices. Of course, that industry has since moved on to lambdas, which is microservices on steroids.
The problem is the "micro" part. Service oriented architecture is generally the way to go, but the service boundaries should be defined by engineering constraints, not as arbitrarily small.
Where I work, they consider a service managed full-time by a team of 2-8 people a "microservice." Before that, they had a monolith shared by a dept of ~120.
I always tell people if they can’t handle writing decent libraries they also won’t handle microservices. Especially when a 3 person team cranks out 15 microservices, ideally with different languages.
I think separately deployed services built from same monolithic codebase makes a lot of sense. You get to choose resources per service, but can get the benefits of sharing code/tests.
Totally agree. Microservices unnecessarily make things complicated for small teams. IMHO they solve the problem of velocity ONLY when a large engineering team is slowed down due to too many release and cross-cutting dependencies on a monolith. Although I see people solving this with modular monoliths, merge queues and CODEOWNERS effectively.
The few cases where microservices make sense are probably when we have a small and well-bounded use case like webhooks management, notifications, or maybe read scaling on some master dataset.
I agree, most startups could do with a decent hypervisor plus a VPS for web visibility, but honestly self-hosting is fine. I'm surprised no one has built a startup environment in a box of boxes (pfsense/truenas/proxmox/minIO/openwrt) <should cover almost any tech stack imaginable>; if you want bleeding edge, add microcloud from canonical or incus.
The opposite of "microservices" is not "monoliths". The organisation I work at has something like 250-300+ microservices all in a monolith. This is the best of both worlds for large applications, in my opinion.
(It's no coincidence that this company was largely loaded up with ex-Googlers in the early days).
Wish I could upvote this 100 times. At work we started off with the multi-repo, multi-service approach and we have burned countless hours managing dependencies, maintaining pipelines, and hacking together things to make local development work. Now we are trying to consolidate back to a monorepo.
The team responsible for a single microservice at a Big Tech company is often as large as, or even larger than, the entire engineering team of a startup. The same can be true for the size of the codebase. This is why it often doesn't make sense for a startup to introduce microservices.
I've found using Cloudflare Workers really productive, esp. their R2 and Durable Objects bindings. Are these technically "microservices" and should they be avoided if following trad software patterns?
Using them makes it easy to build endpoints for things like WhatsApp and other integrations
microservices are a gigantic waste of time. like TDD.
it takes skill and taste to use only enough of each. unfortunately a lot of VC $$$ has been spent by cloud companies and a whole generation or two of devs are permasoiled by the micro$ervice bug.
don't do it gents. monolith, until you literally cannot go further, then potentially, maybe, reluctantly, spin out a separate service to relieve some pressure.
Ha! I always feel more than a little embarrassed when it happens, but I can't sit idly by while TDD is slandered, especially from so seemingly oblique an angle!
While I agree with you regarding microservices (eg language abstractions provide 80% of the encapsulation SOA provides for 20% of the overhead) and I readily acknowledge that 100% test coverage is a quixotic fantasy, I really can't imagine writing reliable software without debuggers, print-statements, or a REPL—all of which TDD replaces in my workflow.
How, I wonder, do you observe the behavior of the program if not through tests? By playing with it? Manually reproducing state? Or, do you simply wait until after the program is written to test its functionality?
I wonder what mental faculties I lack that facilitate your TDD-less approach. Can it be learned?
There was a point in time (circa 2019-2020) when the madness got so severe that every new feature ended up as a microservice backed by a DB with a single table (plus a couple tables for API keys, migration tracking, etc.)
I love it when all my CRUD has to be abstracted over HTTP. /s
I see this a lot ("if you are a startup, just ship a monolith").
I think this is the wrong way to frame it. The advice should be "just do the scrappy thing".
This distinction is important. Sometimes, creating a separate service is the scrappy thing to do, sometimes creating a monolith is. Sometimes not creating anything is the way to go.
Let's consider a simple example: adding a queue poller. Let's say you need to add some kind of asynchronous processing to your system. Maybe you need to upload data from customer S3 buckets, or you need to send emails or notifications, or some other thing you need to "process offline".
You could add this to your monolith, by adding some sort of background pollers that read an SQS queue, or a table in your database, then do something.
But that's actually pretty complicated, because now you have to worry about how much capacity to allocate to processing your service API and how much capacity to allocate to your pollers, and you have to scale them all up at the same time. If you need more polling, you need more API servers. It becomes a giant pain really quickly.
It's much simpler to just separate them than it is to try to figure out how to jam them together.
Even better though, is to not write a queue poller at all.
You should just write a Lambda and point it at your queue.
This is particularly true if you are me, because I wrote the Lambda Queue Poller, it works great, and I have no real reason to want to write it a second time. And I don't even have to maintain it anymore because I haven't worked at AWS since 2016. You should do this too, because my poller is pretty good, and you don't need to write one, and some other schmuck is on the hook for on-call.
Also you don't really need to think about how to scale at all, because Lambda will do it for you.
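For reference, a queue-consuming Lambda is roughly this much code (a sketch of the standard SQS event shape; error handling and idempotency are left out):

```python
# handler.py -- point this at an SQS queue via an event source mapping;
# Lambda does the polling, batching, retries and scaling for you
import json

def handler(event, context):
    # SQS invocations deliver a batch of records under "Records"
    for record in event.get("Records", []):
        payload = json.loads(record["body"])
        # ... do the actual work: send the email, ingest the file, etc. ...
        print("processed message", record.get("messageId"), payload)
```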
Sure, at some point, using Lambda will be less cost effective than standing up your own infra, but you can worry about that much, much, much later. And chances are there will be other growth opportunities that are much more lucrative than optimizing your compute bill.
There are other reasons why it might be simpler to split things. Putting your control plane and your data plane together just seems like a headache waiting to happen.
If you have things that happen every now and then ("CreateUser", "CreateAccount", etc) and things that happen all the time ("CaptureCustomerClick", or "UpdateDoorDashDriverLocation", etc) you probably want to separate those. Trying to keep them together will just end up causing you pain.
I do agree, however, that having a "Users" service and an "AccountService" and a "FooService" and "BarService" or whatever kind of domain driven nonsense you can think of is a bad idea.
Those things are likely to cause pain and high change correlations, and lead to a distributed monolith.
I think the advice shouldn't be "Use a Monolith", but instead should be "Be Scrappy". You shouldn't create services without good reason (and "domain driven design" is not a good reason). But you also shouldn't "jam things together into a monolith" when there's a good reason not to. N sets of crud objects that are highly related to each other and change in correlated ways don't belong in different services. But things that work fundamentally differently (a queue poller, a control-plane crud system, the graph layer for grocery delivery, an llm, a relational database) should be in different services.
This should also be coupled with "don't deploy stuff you don't need". Managing your own database is waaaaaaay more work than just using Dynamo DB or DSQL or Big Table or whatever....
So, "don't use domain driven design" and "don't create services you don't need" is great advice. But "create a monolith" is not really the right advice.
> This distinction is important. Sometimes, creating a separate service is the scrappy thing to do, sometimes creating a monolith is. Sometimes not creating anything is the way to go.
I think this hits the nail on the head. People are trying to find the "one true way" for microservices vs monoliths. But it doesn't exist. It's context dependent.
It's like the DRY vs code duplication conversation. Trying to dictate that you will never duplicate code is a fool's errand, in the same way that duplicating code whenever something is slightly different is foolish.
You're probably on the early part of the curve where anything works - small team, simple product, no scale - come back when one or two of these changes...
You'll need services. They're hard. If something is hard but it needs to be done, you should get good at it.
Like every fad, there's a backlash from people seeing the fad fall apart when used poorly.
Services are a good pattern with trade-offs. Weigh the trade-offs; just don't do things for the sake of doing them.
fallingknife|9 months ago
motorest|9 months ago
I don't see any major epiphany in this. In fact, it reads like a tautology. The very definition of microservice is that it's an independently evolving domain. That's a basic requirement.
asdf6969|9 months ago
Very true in my experience. The main benefit is letting small groups of people work independently without stepping on each other’s toes. Although I’ve worked on a project where multiple teams owned microservices that were supposed to be standardized with each other, and it just led to endless meetings and requirements churn since nobody was willing to work on the other team's service but everyone had an opinion on what the cross-team standard should be. Learning the diplomatic way to say “mind your own business” was more important than any technical skills for getting code merged.
PaulHoule|9 months ago
Using Spring or Guava in the Java world, it is common for people to write "services" that are simply objects implementing some interface which are injected by the framework. In a case like that you can imagine a service could have either an in-process implementation or an out-of-process implementation (e.g. via a web endpoint or some RPC). Frameworks like that are normally thinking at the level of "let's initialize one application in one address space at a time", but it would be nice to see something oriented towards managing applications that live in various address spaces.
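To make that concrete, a rough sketch of the shape of the idea (in Python rather than Java, with a made-up PriceService): callers depend only on the interface, and the wiring decides whether the implementation lives in-process or behind an HTTP endpoint.

    import json
    import urllib.request
    from abc import ABC, abstractmethod

    class PriceService(ABC):                      # the interface callers depend on
        @abstractmethod
        def price(self, sku: str) -> float: ...

    class LocalPriceService(PriceService):        # in-process implementation
        def price(self, sku: str) -> float:
            return {"apple": 1.25}.get(sku, 0.0)

    class RemotePriceService(PriceService):       # out-of-process implementation
        def __init__(self, base_url: str):
            self.base_url = base_url
        def price(self, sku: str) -> float:
            with urllib.request.urlopen(f"{self.base_url}/price/{sku}") as resp:
                return json.load(resp)["price"]

    def checkout_total(prices: PriceService, sku: str) -> float:
        return prices.price(sku) * 1.08           # the caller never knows which one it got

    print(checkout_total(LocalPriceService(), "apple"))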
Trouble is that some people get this squee when they hear they can use JDK 9 for this project and JDK 10 for another project and JDK 11 for another project and they'd rather die than eschew the badly broken Python 3.5 for something better. If you standardized absolutely everything I think you could be highly productive with microservices because you wouldn't have to face gear switching or deal with teams who just don't know that XML serialization worked completely differently in JDK 7 vs JDK 8 thus the services they make don't quite communicate properly, etc.
grg0|9 months ago
It's not wrong at all, literally Conway's law: https://en.wikipedia.org/wiki/Conway's_law
9rx|9 months ago
Sounds right, no? Service is what people provide, implied in the scope of a macro economy. Microservice then implies the same type of service, but within the micro economy of a single business.
stephen|9 months ago
I agree, but just saying "multiple teams" has led many eng directors to think "I have two squads now --> omg they cannot both be in the same monolith".
When both squads are 5 people each.
And the squads re-org (or "right size") every 9 months to re-prioritize on the latest features.
Five years go by, 7 team/re-org changes, all of which made sense, but thank god we didn't microservice on the 2nd/3rd/4th/5th/6th team boundaries. :grimacing:
We should say "stable, long-lived teams" -- like you need to have a team that exists with the same ownership and mandate for ~18 months to prove it's a stable entity worth forming your architecture around.
mindcrash|9 months ago
They ignored me and went the microservices way.
Guess what?
2 years later the rebuild of the old codebase was done.
3 years later and they are still fighting delivery and other issues they would never have had if they hadn't ignored me and had just gone for the "lame" monolith.
Moral of this short story: I can personally say everything this article says is pretty much true.
xnx|9 months ago
Having added a fancy new technology and a "successful" project to their resume, they're supposed to move on to the next job before the consequences of their actions are fully obvious.
abirch|9 months ago
Alupis|9 months ago
If people on the team continue to think about the "system" as a monolith (what they already know and are comfortable with), you'll hit friction every step of the way from design all the way out to deployment. Microservices throw out a lot of traditional assumptions and designs, which can be hard for people to subscribe to.
I think there has to be adequate "buy-in" throughout the org for it to be successful. Turning an existing mono into microservices is very likely to meet lots of internal resistance as people have varying levels of being "with it", so-to-speak.
ellisv|9 months ago
Sounds to me like every startup.
jeffwask|9 months ago
WHY? Just why?
ljm|9 months ago
As one would expect, they made bank from their consulting endeavor and rode off into the sunset while the rest of us wasted several years of our careers rewriting ugly but functional monolithic code into distributed Java based microservices. We could have been working on features and product but essentially were justifying a grift, adding new and novel bugs as we rebuilt stable APIs from scratch.
The company went under not long after the project was abandoned. Nobody, of course, would be held to account for it. I will no longer touch a tech consultancy like TW with a 10 foot barge pole.
dkkergoog|9 months ago
[deleted]
jihadjihad|9 months ago
> grug wonder why big brain take hardest problem, factoring system correctly, and introduce network call too
> seem very confusing to grug
0: https://grugbrain.dev/#grug-on-microservices
jayd16|9 months ago
BoardsOfCanada|9 months ago
cgannett|9 months ago
didip|9 months ago
They’re a tool to solve people issues. They can remove bureaucratic hurdles and allow devs to be somewhat autonomous again.
In a small startup, you really don’t gain much from them. Unless the domain really necessitates them, e.g. the company uses Elixir but all of the AI tooling is written in Python/Go.
echelon|9 months ago
You can put most of your crud and domain logic in a monolith, but if you have a GPU workload or something that has very different requirements - that should be its own thing. That pattern shouldn't result in 100 services to maintain, but probably only a few boundaries.
Bias toward a monolith for everything, but know when you need to carve something out on its own.
At scale, you're 100% correct.
convolvatron|9 months ago
demarq|9 months ago
jerf|9 months ago
I can't prove this scales up forever but I've been very happy with making sure that things are carefully abstracted out with dependency injection for anything that makes sense for it to be dependency-injected, and using module boundaries internally to a system as something very analogous to microservices, except that it doesn't go over a network. This goes especially well with using actors, even in a non-actor-focused language, because actors almost automatically have that clean boundary between them and the rest of the world, traversed by a clean concept of messages. This is sometimes called the Modular Monolith.
Done properly, should you later realize something needs to be a microservice, you get clean borders to cut along and clean places to deal with the consequences of turning it into a network service. It isn't perfect but it's a rather nice cost/benefit tradeoff. I've cut out, oh, 3 or 4 microservices out of monoliths in the past 5 years or so. It's not something I do everyday, and I'm not optimizing my modular monoliths for that purpose... I do modular monoliths because it is also just a good design methodology... but it is a nice bonus to harvest sometimes. It's one of the rare times when someone comes and quite reasonably expects that extracting something into a shared service will take months and you can be like "would you like a functioning prototype of it next week"?
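A stripped-down sketch of that kind of boundary (names invented; a real version would have supervision and error handling): each module owns an inbox of plain messages, so promoting it to a service later is mostly swapping the in-memory queue for a real one and serializing the message.

    import queue
    import threading
    import time
    from dataclasses import dataclass

    @dataclass
    class SendInvoice:                 # messages are plain data, trivially serializable later
        customer_id: int
        amount_cents: int

    class BillingModule:
        def __init__(self):
            self.inbox = queue.Queue()
            threading.Thread(target=self._run, daemon=True).start()

        def _run(self):
            while True:
                msg = self.inbox.get()   # the only way anything reaches this module
                print(f"invoicing customer {msg.customer_id} for {msg.amount_cents} cents")

    billing = BillingModule()
    billing.inbox.put(SendInvoice(customer_id=42, amount_cents=1999))
    time.sleep(0.2)                      # let the toy worker drain the inbox before exit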
roguecoder|9 months ago
The only way for significant architectural boundaries at team boundaries to not result in incredibly painful software, especially for a growing team, is to let the software organize the teams. Which means reorging the company whenever you need to refactor, and somehow guessing right about how many changes each component will need in the coming year.
It also means you can't have product and engineers explore a problem together, or manage by objective with OKRs since engineers aren't connected to business outcomes.
I know that all the ex-Amazonians are convinced this is the only way to build software, but it really, really isn't.
xcskier56|9 months ago
- You need to use a different language than your core application. E.g. we build Rails apps but need to use R for a data pipeline and 100% could not build this in ruby.
- You have 1 service that has vastly different scaling requirements than the rest of your stack. Then splitting that part off into its own service can help
- You have a portion of your data set that has vastly different security and lifecycle requirements. E.g. you're getting healthcare data from medicare.
Outside of those, and maybe a few other edge cases, I see basically no reason why a small startup should ever choose microservices... you're just setting yourself up for more work for little to no gain.
Scarblac|9 months ago
shooker435|9 months ago
If you need to keep the lights on or maintain an SLA and can do so by separating a concern, it can really reduce risk and increase speed when deploying new features on "less important" components.
Akronymus|9 months ago
hosh|9 months ago
> Microservices only pay off when you have real scaling bottlenecks, large teams, or independently evolving domains.
The BEAM language platform can cover scaling bottlenecks (at least within certain ranges of scale) and independently evolving domains, but has many of the advantages of working with a monolith when the team is small and searching for product-fit.
Like anything there are tradeoffs. The main one is that you'd have to learn how to write code with immutable data structures, be more thoughtful about how concurrent processes talk to each other, and decide what kind of failure modes you want to design into things. Many teams also don't know how to hire Erlang or Elixir developers.
siliconc0w|9 months ago
utmb748|9 months ago
Agree on the organizational win; the smaller merge requests within the team were also superb.
With around 5-10 devs on a monolith we ran into conflicts more often: deployments, bigger merge requests, and releasing by feature were all problematic. Microservices made the team more productive, but rules about tests/docs/endpoints/code were important.
frollogaston|9 months ago
mikeocool|9 months ago
Though, if you’re on a small team and really want to use microservices, two places I have found it to be somewhat advantageous:
* wrapping particularly bad third party APIs or integrations — you’re already forced into having a network boundary, so adding a service at the boundary doesn’t increase complexity all that much. Basically this lets you isolate the big chunk of crappy code involved in integrating with the 3rd party, and give it a nice API your monolith can interact with (rough sketch after this list).
* wrapping particularly hairy dependencies — if you’ve got a dependency with a complex build process that slows down deployments or dev setup — or the dependency relies on something that conflicts with another dependency — wrapping it in its own service and giving it a nice API can be a good way to simplify things for the monolith.
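As a rough illustration of the first case (Flask/requests and the vendor's endpoint and field names are assumptions, not a recommendation): the wrapper's entire job is translating the vendor's mess into the one clean shape the monolith actually wants.

    import os
    import requests
    from flask import Flask, jsonify

    app = Flask(__name__)
    LEGACY_URL = os.environ["LEGACY_VENDOR_URL"]   # the awkward upstream API

    @app.get("/customers/<cust_id>")
    def customer(cust_id: str):
        raw = requests.get(f"{LEGACY_URL}/v1/CUST.EXPORT", params={"id": cust_id}).json()
        # all the vendor-specific weirdness stays here, behind the boundary
        return jsonify({
            "id": cust_id,
            "name": raw.get("CUST_NM", "").title(),
            "active": raw.get("STATUS_CD") == "A",
        })

    if __name__ == "__main__":
        app.run(port=5001)              # the monolith only ever talks to this clean API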
roguecoder|9 months ago
You can get the architectural benefits of microservices by using message-passing-style Object-Oriented programming. It requires the discipline not to reach directly into the database, but assuming you just Don't Do That, a well-encapsulated "object" is a microservice that runs in the same virtual machine as the other microservices.
Java is the most mainstream language that supports that: whenever you find yourself reaching for a microservice, instead create a module, namespace the database tables, and then expose only the smallest possible public interface to other modules. You can test them in isolation, monitor the connections between them, and bonus: it is trivial to deploy changes across multiple "services" at the same time.
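The same shape works outside Java too; a toy Python version of one such "module" (names invented, with a dict standing in for the billing_-prefixed tables) just to show how small the public surface can be:

    # billing.py -- one "service" living in the same process as everything else.
    # Convention: other modules import only the two functions below, and every
    # table this module touches is prefixed billing_ so ownership stays obvious.
    _invoices = {}                        # stand-in for the billing_invoices table

    def create_invoice(customer_id: int, cents: int) -> int:
        invoice_id = len(_invoices) + 1
        _invoices[invoice_id] = {"customer_id": customer_id, "cents": cents}
        return invoice_id

    def get_invoice(invoice_id: int) -> dict:
        return dict(_invoices[invoice_id])   # hand out copies, not internal state

    __all__ = ["create_invoice", "get_invoice"]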
DarkNova6|9 months ago
no_wizard|9 months ago
For example, we have an authentication microservice at work. It makes sense that it lives outside of the main application, because it's used in multiple different contexts and the service boundary allows it to be more responsive to changes, upgrades and security fixes than having it be part of the main app, and it deploys differently than the application. It also adds enough intentional friction that we don't accidentally put logic where it doesn't belong as part of the user authentication process. It has helped keep the code focused on only primary concerns.
That said, you can't apply any of these patterns blindly, as is so often the case. A good technical leader should push back when the benefits don't actually exist. The real issue is lack of experience making technical decisions on merits.
This includes high level executive leaders in the organization. At a startup especially, they are still often involved in many technical decisions. You'd be surprised (well maybe not!) how the highest leadership in a company at a startup will mandate things like using microservices and refuse to listen to anything running counter to such things.
[0]: https://en.wikipedia.org/wiki/Rule_of_thumb
esafak|9 months ago
zsoltkacsandi|9 months ago
> It also adds enough intentional friction that we don't accidentally put logic where it doesn't belong as part of the user authentication process.
Preventing misplaced logic is a matter of good code structure, well-defined software development processes and team discipline - not something that requires splitting into a separate microservice, and definitely not something that you want to solve at the system architecture level.
dkarl|9 months ago
In monoliths, they generally don't.
There's no logical reason why you couldn't pay as much attention to decomposition and API design between the modules of a monolith. You could have the benefit of good design without all the architectural and operational challenges of microservices. Maybe some people succeed at this. But in practice I've never seen it. I've seen people handle the challenges of microservices successfully, and I've never seen a monolith that wasn't an incoherent mess internally.
This is just my experience, one person's observations offered for what they're worth.
In practice, in the context of microservices, I've seen an entire team work together for two weeks to break down a problem coherently, holding off on starting implementation because they knew the design wasn't good enough and it was worth the time to get it right. I've seen people escalate issues with others' designs because they saw a risk and wanted to address it.
In the context of monoliths, I've never seen someone delay implementation so much as a day because they knew the design was half-baked. I rarely see anyone ask for design feedback or design anything as a team until they've screwed something up so badly that it can't be avoided. People sometimes make major design decisions in a split second while coding. What kind of self-respecting senior developer would spend a week getting input on an internal code API before starting to implement? People sometimes aren't even aware that the code they wrote that morning has implications for code that will be written later.
Theoretically this is okay because refactoring is easy in a monolith. Right? ... It is, right?
I'm basically sold on microservices because I know how to get developers to take design seriously when it's a bunch of services talking to each other via REST or grpc, and I don't know how to get them to take the internal design of a monolith seriously.
roguecoder|9 months ago
Every good monolith I've worked in (and I have worked in several, including one that was more than twenty years old) was highly-modular, well-designed with an easy-to-explain architecture.
The other thing they had in common was that code reviews talked about the aesthetics of the code and design, instead of just hunting for errors or skimming for security problems. It was relatively common to throw out the first proposed PR and start over, and that was fine because people were slicing the work small enough they were posting four to six PRs a week anyway.
It took the engineers at the company being willing to collaborate on the craft of software development and prioritize the long-term health of the code over short-term feature delivery. And the result of being willing to go a little bit slower day-to-day was that the actual feature delivery was faster than anywhere else I've ever worked.
Without a functioning professional culture, nothing is going to be great. But at least with microservices people do have to design an API at some point.
rho4|9 months ago
Not that I would ever want to give up our monolith, but we do experience the problems you point out.
bitcurious|9 months ago
1. You get to minimize devops/security/admin work. Really a consequence of using serverless tooling, but you land on something like a microservices architecture if you do.
2. You can break out work temporally. This is the big one - when you're a small team supporting multiple products, you often don't have continuity of work. You have one project for a few months, a completely unrelated product for another few months. Microservice architectures are easier to build and maintain in that environment.
roguecoder|9 months ago
Each repo you create is one more set of Dependabot alerts you need to keep on top of.
codr7|9 months ago
What planet are you living on?
Ensorceled|9 months ago
In the Q&A afterward, another local startup CTO asked about problems their company was having with their microservices.
The successful CTO asked two questions: "How big is your microservices tooling team?" and "How big is your Dev Ops Team?"
His point was: if your development team is not big enough to afford dedicated teams for tooling and dev ops, it's not big enough to afford microservices.
utmb748|9 months ago
monero-xmr|9 months ago
sitkack|9 months ago
I have played around with architectures like this, but I allowed the caller to patch in a dependent function in the call, with those function overlay overrides passed from function to function.
Apologies, used sst
alabastervlog|9 months ago
bob1029|9 months ago
One should consider if they can dive even deeper into the monolithic rabbit hole. For example, do you really need an external hosted SQL provider, or could you embed SQLite?
From a latency & physics perspective, monolith wins every time. Making a call across the network might as well take an eternity by comparison to a local method. Arguments can be made that the latency can be "hidden", but this is generally only true for the more trivial kinds of problems. For many practical businesses, you are typically in a strictly serialized domain which means that you are going to be forced to endure every microsecond of delay. Assuming that a transaction was not in conflict doesn't work at the bank. You need to be sure every time before the caller is allowed to proceed.
The tighter the latency domain, the less you need to think about performance. Things can be so fast by default that you can actually focus on building what the customer is paying for. You stop thinking about the sizes of VMs, who's got the cheapest compute per dollar and other distracting crap.
no_wizard|9 months ago
You could say this about almost any pattern, if you genuinely tried to make microservices work it could work in ~100% of cases, I'm sure of that.
It's this pattern of dismissing or accepting a solution out of strong prejudice, without evaluating the merits, that is the real problem. That's the true behavior we need to get away from.
We as an industry may find that modular monoliths trend toward the top as a result (I hate to speculate too much; every company is different and there are in fact other patterns of development beyond the two mentioned), but that would be a side effect if true. The real win is moving away from such prejudiced behavior.
codr7|9 months ago
addisonj|9 months ago
What this article doesn't cover... and where a good chunk of my career has been, is when companies are driven to break out into services, which might be due to scale, team size, or becoming a multi-product company. Whatever the reason, it can kill velocity during the transition. In my experience, if this is being done to support becoming multi-product, this loss in velocity comes at the worst time and can sink even very competent teams.
As an industry, the gap between what makes sense for startups and what makes sense for scale can be a huge chasm. To be clear, I don't think it means you should invest in microservices on the off-chance you need to hit scale (which I think is what many convince themselves of), nor does it mean that you should always head to microservices even when you hit those forcing functions (scaling monoliths is possible!)
That said, modularity, flexibility, and easy evolution are super important as companies grow, and I do really think the next generation of tools and platforms will benefit from suiting themselves better to evolution and flexibility than they do today. One idea I have thought about for some time is platforms that "feel" like a monolith, but are 1) more concrete in building firmer interfaces between subsystems and 2) flexible in how calls happen between these interfaces (imagine being able to run a subsystem embedded or transparently move calls over an RPC interface). Certainly that is "possible" with well structured code in platforms today... but it isn't always natural.
I am not sure of the answer, but I really hope the next 10 years of my career has fewer massive chasms crossed via huge multi-year painful efforts and more cautious, careful evolution enabled by well-considered tools and platforms.
gleenn|9 months ago
https://github.com/polyfy/polylith
4ndrewl|9 months ago
Microservice architecture is a deployment strategy.
If you have a problem with deployments (e.g. large numbers of teams, perhaps some external suppliers running at different cadences, or with different tech stacks), then microservices are a fine solution to this.
andreygrehov|9 months ago
alaithea|9 months ago
rglover|9 months ago
1. Start with a monolith
2. If necessary, set up a job server that can be vertically/horizontally scaled and then give it a private API, or, give it access to the same database as the monolith.
For an overwhelming number of situations, this works great. You separate the heavy compute workloads from the customer-facing CRUD app and can scale the two independent of one another.
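A toy version of the job-server half (table and column names are invented, and SQLite stands in for whatever database the monolith already uses; a real shared-DB setup would also lock rows, e.g. SELECT ... FOR UPDATE SKIP LOCKED): the web app inserts rows, and this process polls and works them.

    import sqlite3
    import time

    db = sqlite3.connect("app.db")   # the same database the monolith writes to
    db.execute("""CREATE TABLE IF NOT EXISTS jobs
                  (id INTEGER PRIMARY KEY, payload TEXT, status TEXT DEFAULT 'pending')""")

    while True:
        row = db.execute(
            "SELECT id, payload FROM jobs WHERE status = 'pending' ORDER BY id LIMIT 1"
        ).fetchone()
        if row is None:
            time.sleep(1)            # nothing to do; the CRUD app stays unburdened
            continue
        job_id, payload = row
        print(f"working job {job_id}: {payload}")   # e.g. render the PDF, send the email
        db.execute("UPDATE jobs SET status = 'done' WHERE id = ?", (job_id,))
        db.commit()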
The whole microservices thing always seemed like an attempt by cloud providers to just trick you into using their services. The first time I ever played with serverless/lambda, I had a visceral reaction to the deployment process and knew it would end in tragedy.
parpfish|9 months ago
My current job insists that they have a “simple monolith” because all the code is in a single repo. But that repo has code to build dozens of python packages and docker containers. Tons of deploy scripts. Different teams/employees are isolated to particular parts of the codebase.
It feels a lot like microservices, but I don’t know what the defining feature of microservices is supposed to be
shooker435|9 months ago
Which honestly may be the future if LLMs stay in a dev's toolkit. Plugging in an AI model to a monorepo provides so much context that can't be easily communicated across microservices in separate repos.
phodge|9 months ago
For example you may be forced to split out some components into separate services because they require a different technology stack to the monolith, but that doesn't strictly require a separate source code repository.
johncoltrane|9 months ago
karmakaze|9 months ago
- Use one-way async messaging. Making a UserService that everything else uses synchronously via RPC/REST/whatever is a very bad idea and an even worse time. You'll struggle for even 2-nines of overall system uptime (because they don't average, they multiply down).
- 'Bounded context' is the most important aspect of microservices to get right. Don't make <noun>-services. You can make a UserManagementService that has canonical information about users. That information is propagated to other services which can work independently each using the eventually consistent information they need about users.
There are other dumb things that people do, like sharing a database instance for multiple 'micro'-services and not even having separately accessible schemas. In the end if done well, each microservice is small and pleasant to work on, with coordination between them being the challenging part both technically and humanly.
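Roughly, the one-way flow looks like this (topic and field names are invented, and publish() stands in for SNS/Kafka/RabbitMQ): the user-management service emits facts, and consumers keep their own eventually consistent copy instead of calling back synchronously on the request path.

    import json

    def publish(topic: str, event: dict) -> None:
        print(f"publish to {topic}: {json.dumps(event)}")   # stand-in for the real broker

    # In UserManagementService, after a successful write:
    publish("user-events", {"type": "UserUpdated", "user_id": 7, "email": "a@example.com"})

    # In, say, the billing service's consumer:
    local_users = {}                       # billing's own eventually consistent copy

    def on_user_event(event: dict) -> None:
        if event["type"] == "UserUpdated":
            local_users[event["user_id"]] = {"email": event["email"]}

    on_user_event({"type": "UserUpdated", "user_id": 7, "email": "a@example.com"})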
bossyTeacher|9 months ago
hereonout2|9 months ago
jmyeet|9 months ago
Every service boundary you have to cross is a point of friction and a potential source of bugs and issues, so by having more microservices you just have more that can go wrong, by definition.
A service needs to maintain an interface for compatibility reasons. Each microservice needs to do that and do integration testing with every service they interact with. If you can't deploy a microservice without also updating all its dependencies then you don't have an independent service at all. You just have a more complicated deployment with more bugs.
The real problem you're trying to solve is deployment. If a given service takes 10 minutes to restart, then you have a problem. Ideally that should be seconds. But more ideally, you should be able to drain traffic from it then replace it however long it takes and then slowly roll it out checking for canary changes. Even more ideally, this should be largely automated.
Another factor: build times. If a service takes an hour to compile, that's going to be a huge impediment to development speed. What you need is a build system that caches hermetic artifacts so this rarely happens.
With all that above, you end up with what Google has: distributed builds, automated deployment and large, monolithic services.
bilbo-b-baggins|9 months ago
Do you have standardization and reuse of things like linting, formatting, ci/cd pipelines, version stability, deployment patterns, monitoring integrations, integration and end to end testing, etc.? If you’re doing those things bespoke per repo/deployment, or if you don’t have roles dedicated to the support and maintenance, you’re not going to have a good time with microservices.
Do you have actual issues of scale where API hot paths are dominating your runtime? Are they horizontally scalable or bottlenecked on downstream dependencies (databases)? You can’t solve scale issues by just spinning microservices willy nilly (e.g. by domain topic).
Is your development environment sophisticated enough to actually run a stack? Or do you have supporting clusters that allow for local binding of services? If not, you’re going to struggle with microservice local development, and pay for a slow QA in staging.
Does all that require supporting roles and expertise? Yeap. If you’re a 5 person startup you probably don’t have that. If you’re a 150 person startup, you might.
I’ve seen Java monoliths with 11M lines of code that represented 80% of the production cost to run and the gradual break out of targeted APIs to microservices halved that while the monolith still lived on. I’ve seen queued microservice architectures ripping through tens of millions of events/requests a minute with less than a thousand pods (across the services) and a fraction of the resources of the monolith.
Ultimately there’s no free lunch in software and you shouldn’t pursue any design without understanding the tradeoffs.
gad21034|9 months ago
- monolith to start with very little time spent on code architecture patterns like DDD (although these days with llms I would say go for it and use DDD patterns in your prompts)
- optimize code cleanliness by adhering to better code architecture patterns
- when it feels like you are doing weird things to scale a process on the monolith (scheduling background tasks where you could break out a pubsub-to-function service and defend your uptime while coordinating on a shared DB), drop any religion around no microservices.
CharlieDigital|9 months ago
I think many teams do not have a practical guide on how to achieve this. Certainly, Google's solution in this case is far too complex for most teams. But many teams can achieve the 5 core benefits that they mentioned with a simpler setup. I wrote about this in a blog post, A Practical Guide to Modular Monoliths with .NET [1], with a GitHub repo showing how to achieve this [2] as well as a video walkthrough [3].
This approach has proven (for me) to be easy to implement, package, deploy, and manage and is particularly good for startups with all of the qualities mentioned in the Google paper without much complexity added.
[0] https://dl.acm.org/doi/pdf/10.1145/3593856.3595909
[1] https://chrlschn.dev/blog/2024/01/a-practical-guide-to-modul...
[2] https://github.com/CharlieDigital/dn8-modular-monolith
[3] https://www.youtube.com/watch?v=VEggfW0A_Oo
PathOfEclipse|9 months ago
The article does mention "invest in modularity", but to be honest, if you're in frantic startup mode dumping code into a monolith, you're probably not caring about modularity either.
Lastly, I would imagine it's easier to start with microservices, or multiple mid-sized services if you're relying on advanced cloud infra like AWS, but that has its own costs and downsides.
duxup|9 months ago
For large orgs where each service has a dedicated team it starts to make sense... but then it becomes clear that microservices are an organizational solution.
haburka|9 months ago
utmb748|9 months ago
CI/CD - infra can be as code and shared across services, K8s port-forward for local development, better resource utilization, multiple envs and so on; the available tooling, if set up correctly, usually keeps working.
A plus not mentioned: usually smaller merge requests, features can be split and better estimated, fewer conflicts during work or testing... and the possibility to share code in packages.
Also, if there are no tests it doesn't matter whether it's a monorepo or microservices - you can break things easily or spend more time.
You should budget for tests and documentation, and keep working on tech debt.
The next common issue I see: too big a tech stack because something is popular.
Cthulhu_|9 months ago
The other one was a microservice architecture in front of the real problem, a Java backend service that hid the real real problem, one or more mainframes. But the consultants got to play in their microservices garden, which was mostly just a REST API in front of a Postgres database that would store blobs of JSON. And of course these microservices would end up needing to talk to each other through REST/JSON.
I've filed this article in my "microservices beef" bookmarks folder if I ever end up in another company that tries to do microservices. Of course, that industry has since moved on to lambdas, which is microservices on steroids.
root_axis|9 months ago
frollogaston|9 months ago
vjvjvjvjghv|9 months ago
metalrain|9 months ago
abhisek|9 months ago
The few cases where microservices make sense are probably when we have a small and well-bounded use case like webhooks management, notifications, or maybe read scaling on some master dataset.
mamidon|9 months ago
bzmrgonz|9 months ago
lenerdenator|9 months ago
mvdtnz|9 months ago
(It's no coincidence that this company was largely loaded up with ex-Googlers in the early days).
cedws|9 months ago
ngrilly|9 months ago
yawnxyz|9 months ago
Using them makes it easy to build endpoints for things like WhatsApp and other integrations
stevebmark|9 months ago
Love this quote, it should be a poster on the wall of any dev who pushes Domain Driven Design on an engineering team.
nottorp|9 months ago
The catch is to keep them all in mind and use them in moderation.
Like everything else in life.
sisve|9 months ago
Context and nuances
mountainriver|9 months ago
Just use regular sized services
sergiotapia|9 months ago
it takes skill and taste to use only enough of each. unfortunately a lot of VC $$$ has been spent by cloud companies and a whole generation or two of devs are permasoiled by the micro$ervice bug.
don't do it gents. monolith, until you literally cannot go further, then potentially, maybe, reluctantly, spin out a separate service to relieve some pressure.
gavmor|9 months ago
While I agree with you regarding microservices (eg language abstractions provide 80% of the encapsulation SOA provides for 20% of the overhead) and I readily acknowledge that 100% test coverage is a quixotic fantasy, I really can't imagine writing reliable software without debuggers, print-statements, or a REPL—all of which TDD replaces in my workflow.
How, I wonder, do you observe the behavior of the program if not through tests? By playing with it? Manually reproducing state? Or, do you simply wait until after the program is written to test its functionality?
I wonder what mental faculties I lack that facilitate your TDD-less approach. Can it be learned?
pydry|9 months ago
Like TDD, they're great if done in the right way for the right reasons.
Havoc|9 months ago
Stuff like k8s works fine as a docker delivery vehicle
goji_berries|9 months ago
mgaunard|9 months ago
alaithea|9 months ago
I love it when all my CRUD has to be abstracted over HTTP. /s
nicman23|9 months ago
demarq|9 months ago
httpz|9 months ago
swisniewski|9 months ago
I think this is the wrong way to frame it. The advice should be "just do the scrappy thing".
This distinction is important. Sometimes, creating a separate service is the scrappy thing to do, sometimes creating a monolith is. Sometimes not creating anything is the way to go.
Let's consider a simple example: adding a queue poller. Let's say you need to add some kind of asynchronous processing to your system. Maybe you need to upload data from customer S3 buckets, or you need to send emails or notifications, or some other thing you need to "process offline".
You could add this to your monolith, by adding some sort of background pollers that read an SQS queue, or a table in your database, then do something.
But that's actually pretty complicated, because now you have to worry about how much capacity to allocate to processing your service API and how much capacity to allocate to your pollers, and you have to scale them all up at the same time. If you need more polling, you need more API servers. It becomes a giant pain really quickly.
It's much simpler to just separate them than it is to try to figure out how to jam them together.
Even better though, is to not write a queue poller at all. You should just write a Lambda and point it at your queue.
This is particularly true if you are me, because I wrote the Lambda Queue Poller, it works great, and I have no real reason to want to write it a second time. And I don't even have to maintain it anymore because I haven't worked at AWS since 2016. You should do this too, because my poller is pretty good, and you don't need to write one, and some other schmuck is on the hook for on-call.
Also you don't really need to think about how to scale at all, because Lambda will do it for you.
Sure, at some point, using Lambda will be less cost effective than standing up your own infra, but you can worry about that much, much, much later. And chances are there will be other growth opportunities that are much more lucrative than optimizing your compute bill.
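To be clear about how little code that leaves you with, an SQS-triggered Lambda handler is basically just this (the body handling is a placeholder):

    def handler(event, context):
        # Lambda delivers SQS messages in batches under event["Records"]
        for record in event["Records"]:
            body = record["body"]
            print(f"processing message: {body}")   # e.g. send the email, pull the S3 object
        # raising an exception here makes failed messages retry / go to the DLQ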
There are other reasons why it might be simpler to split things. Putting your control plane and your data plane together just seems like a headache waiting to happen.
If you have things that happen every now and then ("CreateUser", "CreateAccount", etc) and things that happen all the time ("CaptureCustomerClick", or "UpdateDoorDashDriverLocation", etc) you probably want to separate those. Trying to keep them together will just end up causing you pain.
I do agree, however, that having a "Users" service and an "AccountService" and a "FooService" and "BarService" or whatever kind of domain driven nonsense you can think of is a bad idea.
Those things are likely to cause pain and high change correlations, and lead to a distributed monolith.
I think the advice shouldn't be "Use a Monolith", but instead should be "Be Scrappy". You shouldn't create services without good reason (and "domain driven design" is not a good reason). But you also shouldn't "jam things together into a monolith" when there's a good reason not to. N sets of crud objects that are highly related to each other and change in correlated ways don't belong in different services. But things that work fundamentally differently (a queue poller, a control-plane crud system, the graph layer for grocery delivery, an llm, a relational database) should be in different services.
This should also be coupled with "don't deploy stuff you don't need". Managing your own database is waaaaaaay more work than just using Dynamo DB or DSQL or Big Table or whatever....
So, "don't use domain driven design" and "don't create services you don't need" is great advice. But "create a monolith" is not really the right advice.
codinhood|9 months ago
I think this hits the nail on the head. People are trying to find the "one true way" for microservices vs monoliths. But it doesn't exist. It's context dependent.
It's like the DRY vs code duplication conversation. Trying to dictate that you will never duplicate code is a fool's errand, in the same way that duplicating code whenever something is slightly different is foolish.
Context is everything
mannyv|9 months ago
If you don't understand the benefit of xyz then don't do it.
Our microservice implementation is great. It scales with no maintenance, and when you have three people that makes a difference.
mattbillenstein|9 months ago