I really do not understand the debate on monoliths and microservices anymore.
Context matters so much. Should you have absolutely everything in one system? No. And I think no one thinks that anymore.
Should you split your system into as many pieces as possible?
No, of course not.
You are probably storing files in one place and data in a database in another place. And most likely neither is on the webserver receiving requests. Or maybe you are, because context matters and I'm wrong in your particular case. Could be.
My advice would be to split your application up into enough pieces so that:
1. Your engineers feel like they have control over the code and are not afraid of making changes.
2. If you get uptime issues, you can move the heavy/unreliable services to a new node. Don't let one service/endpoint take down other endpoints.
Think and act according to the problems you have in front of you. Don't go extreme in any direction. Context context context
Like any other well reasoned, balanced and pragmatic POV, the problem with your boring approach is you can’t wrap it in a clickbaitable blog and no flame war can emerge from it.
I agree completely. The current FAA outage is a good example of this. What if the system responsible for NOTAMs were the same system responsible for sending system outage messages? It wasn't, so status could be communicated while mitigation was done on a system with unrelated concerns.
At the same time, a few-person startup should probably focus on what allows them to deliver the fastest. I’ve seen that work with relatively monolithic systems and with SOA; tooling choice makes a huge impact.
You're right, and I think most engineers know it. The debate persists as a proxy for a separate debate:
E > I want to reorganize this because it's ugly and hard to work with as it is
M > I want you focusing on this list of features/bugs and not introducing risk by making changes for other reasons
Something doesn't feel right about:
E > I'm the expert, you have to take my word that this change is necessary
So, depending on whether E wants to join or separate, they put on their I-hate/heart-microservices hat and rehash whichever side of the tired old debate serves them.
People who agree or disagree dust off their I-hate/heart-microservices hats, and we go to town for a while.
We're not actually arguing for micro vs macro; we're talking about the context-specific details. We just dress it up in those terms because M is in this meeting, and M isn't close enough to the code to keep up with a conversation about the details.
If enough energy is not expended in this process, whoever lost the debate goes and writes a blog about why they were right. Except nobody will read a blog about their specific codebase, so it ends up being about whether they heart/hate microservices.
When discussing turning our monolith into microservices at a previous job:
My Boss: "I'm convinced this is the right architecture."
Also my boss: "Now how do we break this app up?"
He was certain we needed to shatter our monolith into lots of little pieces, but had no clear vision as to what those pieces would be. From my point of view, the "architecture" he was certain of wasn't an architecture at all. It was just a general notion of doing what he thought everyone else was doing without considering anything about our app. Total cargo cult mentality.
Isn't the problem that nobody has ever developed a complex system using modern programming languages where
"engineers feel like they have control over the code and are not afraid of making changes."
And that trying to have it as a goal only results in the project getting fragmented into a messy ball of micro-components (services/classes/libraries) with such complex interdependencies that nobody feels like they have control over the code, and everyone is afraid of making changes?
It's almost as if there is no real solution to complexity that avoids having to deal directly with the complexity inherent to a problem domain, and that means developing documentation and testing that allows people to touch/change scary systems.
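One concrete form of "testing that allows people to touch scary systems" is a characterization test: before changing inherited code, pin down what it does today so an accidental behavior change fails loudly. A minimal sketch; `legacy_price` and its bulk discount are hypothetical stand-ins for real inherited code:

```python
def legacy_price(quantity: int, unit_cents: int) -> int:
    """Stand-in for an inherited function nobody dares to touch."""
    total = quantity * unit_cents
    if quantity >= 10:  # undocumented bulk discount, found only by reading the code
        total = total * 90 // 100
    return total

def test_characterization():
    # These assertions encode *current* behavior, not desired behavior.
    assert legacy_price(1, 500) == 500
    assert legacy_price(10, 500) == 4500  # the surprising discount, now pinned down
    assert legacy_price(9, 100) == 900

test_characterization()
```

Once the current behavior is pinned, refactors (in a monolith or a microservice) can proceed with far less fear.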
Agreed. I very much like my workplace's approach atm, which I jokingly call "medium-sized services, mostly".
Due to acquisitions, we have 6 or 7 big messy monoliths and, actually, 3-5 infrastructure stacks. This in turn means we have something like 12 different user management systems (because each monolith contains a few layers of legacy user management; it'd be boring otherwise), 4-5 different file and image stores (some products just don't have one), and 4-5 different search implementations. It's a bit of a messy zoo.
Our path out of that mess is to either extract, or re-implement functionality common to the products in smaller scale, standalone services - for example so you end up with a centralized user management, or a centralized attachment system. And these things certainly aren't micro, user management does a bit of search, a bit of SAML, a lot of OIDC, some CRUD for users, groups and such. If you wanted to be silly, this could be 5-6 "micro-services".
But realistically, why? We do gain advantages by having this in a smaller service - we can place a bit of extremely critical functionality in a small service and we can manage that service very, very gently. And we can reduce redundant efforts, even if migrations to this service takes some work. But what would some dedicated SAML-integration-spring-app improve, besides moving 1-2 tables into another database/schema?
> I really do not understand the debate on monoliths and microservices anymore.
> Don't go extreme in any direction. Context context context
That is exactly why the debate occurs. All religions, be they microservices or TDD, take some good ideas to an extreme and remove context from the decision-making process.
Otherwise the term "microservices" would never have been invented: people have been doing "services" since the dawn of time. But you can't create hype out of common sense; you have to go to the extremes.
> My advice would be to split your application up into enough pieces so that:
> 1. Your engineers feel like they have control over the code and are not afraid of making changes.
I'd add: split your team up into a similar number of pieces. Your application will come to resemble the shape of your teams, over time. Design your teams based on where you want the boundaries of your application to fall.
Agreed. I would add an extra thought around "try to go monolith until things really break down", but you get at that, in so far as the point where things break down is where either:
a) engineers are afraid of making changes, or find them difficult
b) uptime issues result in heavy endpoints/services taking down others
To your point, microservices should be introduced expressly to solve those two problems, and not before.
I tend to split things up based on the resources required.
We had a decent sized data pipeline that was entirely microservice and serverless and it was a real joy to work with.
- Our NLP code lived in a service that ran on GPUs
- Our ingest service used a high-RAM but relatively simple CPU service to do in-memory joins cheaply and efficiently
- We had a bunch of specialized query services that were just direct requests to AWS services, or a light Lambda wrapper around a call to a service.
We coordinated things with Airflow, and it was very easy to maintain. Scaling was pretty efficient since we just scaled the pieces that needed it without wasting money on unneeded compute.
> enough pieces so that: 1. your engineers … are not afraid of making changes.
I don’t really understand that argument, and I don’t really feel safer making a change to a microservice inside a large system, as opposed to making a change to a monolith — the consequences of a mistake are equal in both cases (although harder to observe/debug in a microservice architecture) — am I missing something?
Yeah, "it depends" is almost always the right answer but not a very useful one. The devil is in the details, and I don't see any issue in discussing the nuances so people can apply them to their own situation to make their own decision.
One decent rule of thumb is to have one service per 1-3 closely grouped SWEs, like Conway's Law, then likely split them up more to ensure no two services share a database.
No, they aren't. The entire point of the big ball of mud is that there are no meaningful divisions in the code. Everything uses everything willy-nilly, at the smallest possible level of abstraction. There is, metaphorically if not always entirely literally, not a single line of code in the system that you can change without fear of bringing down something else that you may not have even known existed.
Microservices are not a miracle cure or the solution to every problem, but they do force divisions within the code base. Every microservice defines an interface for its input and its output. It may be the sloppiest, crappiest definition ever, with dynamic types and ill-defined APIs and bizarre side effects, but it is some sort of definition, and that means that, if necessary, the entire microservice could be replaced with some new chunk of code without affecting the rest of the system, cut cleanly along the API lines. The microservice may sloppily call dozens of others, but that can be seen and replicated. It may be called by a sloppy combination of other services, but the incoming API can be replicated.
However bad the architecture of the microservice may be, however bad the overall architecture of the microservice-based system as a whole may be, this will be true by structural necessity. The network layer defines some sort of module.
They can create big balls of spaghetti, certainly. In total they can create a big mess; they are not architectural magic by any means. And while a full replacement of a given microservice is practical and possible, if the boundaries are not drawn correctly to start with, fixing that can be much harder in a microservice architecture (with corresponding separation of teams) than a monolith.
But they fail to create the situation that is what I would consider the distinguishing characteristic of a "Big Ball of Mud", where there are no partitions between anything at all. Big Balls of Mud have no equivalent of "replace this microservice". Microservices by necessity, to be "microservices", have partitions.
> It may be the sloppiest, crappiest definition ever, with dynamic types
Funny you should say that, since a microservice API is always dynamically typed and its usage cannot be checked by the compiler. And the more microservices you use, the more dynamically typed the whole project becomes overall.
While you can opt into a free-for-all of everything importing everything, all languages also support creating modules that define APIs for consumers and maintain compile-time type checking.
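A minimal sketch of that kind of module boundary, in Python with type hints (a static checker such as mypy plays the role of the compiler here; the `billing` names are invented for illustration):

```python
# A module boundary without a network: the "billing" API is a small, typed
# function surface. A static checker rejects callers that pass the wrong
# types, which a JSON-over-HTTP microservice API cannot do before runtime.
from dataclasses import dataclass

@dataclass(frozen=True)
class Invoice:
    customer_id: str
    total_cents: int

def create_invoice(customer_id: str, line_items_cents: list[int]) -> Invoice:
    """The only entry point consumers of this module are meant to use."""
    return Invoice(customer_id, sum(line_items_cents))

# Consumer code elsewhere in the monolith:
inv = create_invoice("cust-42", [1500, 250])
print(inv.total_cents)            # 1750
# create_invoice(42, "oops")      # mypy flags this before the code ever runs
```

The in-process call is checked statically and can be refactored with tooling; the equivalent HTTP call would be a string-and-dict contract enforced only at runtime.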
> No, they aren't. The entire point of the big ball of mud is that there are no meaningful divisions in the code. Everything uses everything willy-nilly, at the smallest possible level of abstraction. There is, metaphorically if not always entirely literally, not a single line of code in the system that you can change without fear of bringing down something else that you may not have even known existed.
This is a bit of a strawman. The equivalent strawman criticism of microservices is that every function runs in its own networked service. Is that truly representative of reality? Of course not, and neither is your breakdown of the big ball of mud.
The main reason is to stop crap developers from using globals all over the place, or passing around hash maps stuffed full of config that isn't clearly defined in one place, just continually mutated (and similar bad issues).
Still, crap devs will just find their own ways to mess things up with microservices, but at least they limit the blast radius.
"The entire point of the big ball of mud is that there are no meaningful divisions in the code.... Everything uses everything willy-nilly....not a single line of code in the system that you can change without fear of bringing down something else that you may not have even known existed"
...then OK, perhaps the developers who caused the state you describe above would not cause the exact same problems with microservices -- but will they really move fast and not cause a mess in ANY kind of environment?
The state you describe is not normal for monoliths by any stretch.
It may be normal for old legacy systems with 5 generations of programmers on them. I also believe microservices will have other kinds of problems, but still deep problems, after 5 generations of programmers.
If preventing people from running in completely the wrong direction is your main concern -- why even be in the race at that point? Find new people to work with.
If you personally had success rewriting a Ball of Mud into microservices, consider whether "rewrite" is the important word (as well as the quality of the developers involved), not whether the refactor was to a new monolith or to new microservices.
Microservices with boundaries drawn wrong can leave you needing 20 programmers to do the job of 1. Perhaps the mud looks different from a Big Ball, but it is still mud.
> if the boundaries are not drawn correctly to start with, fixing that can be much harder in a microservice architecture (with corresponding separation of teams) than a monolith.
Therein lies the problem. Nobody draws these boundaries correctly on the first try, and the correct boundaries can shift rapidly over time as new features are added or requirements change.
> Microservices are not a miracle cure or the solution to every problem, but they do force divisions within the code base.
Do they? The code itself may be in entirely separate repos but still be tightly coupled. Monoliths can have cleanly separated libraries/modules, with those modules built from separate repos or, at the very least, kept in different namespaces.
The "macroservices" I've been seeing are many separate containers all sharing at least one data store. So they have all of the disadvantages of the "ball of mud" monolith combined with all of the disadvantages of much more complicated infrastructure. Yet the people working on them think they're "doing microservices" because k8s!
The microservice separation is not just code in separate repos. It's also everything else behind the kimono - keep that kimono clasped tightly!
Microservices is a team organization technique, whereby disparate teams only communicate by well defined APIs. Any technology choices that come out of that are merely the result of Conway's Law.
Any time you lean on code in a random GitHub repository, where you never speak to the author and just use the API you're given, you're doing microservices. This works well enough so long as the product does what you need of it.
The problem is when the product doesn't do what you need. If the microservice teams are under the same organizational umbrella, there is a strong inclination to start talking to other teams instead of building what's needed in house, which violates "only communicate by well-defined APIs". This is where the ball of mud enters.
If your organization is such that you can call up someone on another team, you don't need microservices. They're for places so big that it is impossible to keep track of who is who and your coworkers may as well be some random GitHub repository.
> Any time you lean on code in a random GitHub repository, where you never speak to the author and just use the API you're given, you're doing microservices. This works well enough so long as the product does what you need of it.
No. That's not what a microservice is.
I understand you are trying to draw analogies, but a library is not considered a microservice.
> Any time you lean on code in a random GitHub repository, where you never speak to the author and just use the API you're given, you're doing microservices.
There is a certain kind of "freedom" that is really slavery, but people feel so free when they hear about it that they often squee and hurt themselves with uncontrolled movements.
Microservices can be that way. Now that you have 25 different services in 25 different address spaces you can write them in 11 different languages and even use 4 versions of Python and 3 versions of Java. (I got driven nuts years ago in a system that had some Java 6 processes and some Java 7 processes and it turned out the XML serialization worked very differently in those versions.)
If you want to be productive with microservices you have to do the opposite: you have to standardize build, deployment, configuration, serialization, logging, and many "little" things that are essential but secondary to the application. If a coder working on service #17 has to learn a large number of details to write correct code, they are always going to complain that they are dealing with a "large ball of mud". If those little things are standardized, you can jump to service #3 or #7 and not have it be a research project to figure out "how do I log a message?"
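As one small illustration, the logging part of that standardization can be a tiny shared helper every service imports, so a log line from service #3 looks exactly like one from service #17. A sketch; the field names here are illustrative, not any particular standard:

```python
# Shared structured-logging helper: every service emits the same JSON shape,
# so dashboards and grep patterns work identically across all of them.
import json
import datetime

def log_line(service: str, level: str, message: str, **fields) -> str:
    record = {
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "service": service,
        "level": level,
        "msg": message,
        **fields,  # service-specific extras ride along in the same envelope
    }
    return json.dumps(record, sort_keys=True)

print(log_line("user-management", "INFO", "login ok", user_id="u-1"))
```

The point is not this particular schema, but that the answer to "how do I log a message?" is the same one-liner in every service.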
Don’t you find it weird that everybody else is either over or underengineering, but you, you engineer things exactly the right amount?
I bet when you’re driving, you also tend to notice that everyone else is either an idiot driving way too slow in the middle lane or a maniac speeding past you. Nobody else drives as well as you do.
It must be exhausting for these people to live in a world surrounded by strawmen, while they alone have achieved perfection. Which they will detail how they do in a later blogpost.
Or as I like to say: "Oh, you have a big ball of mud in your monolith because of poor design and want to move to micro-services?" ... "Now you have n^n big balls of mud." Poor design is poor design; adding more complexity just makes it a more complicated poor design.
I think most places rearchitect to microservices because it’s the new shiny. They don’t do the engineering necessary to create a detailed cost/benefit analysis, they just feel it will be better and so they jump in.
For the same reasons the companies don’t do the cost/benefit analysis they don’t spend much time thinking about how they could benefit from rearchitecting their monolith into various libraries, modules, packages and interfaces.
Because they don’t think much about these code boundaries, they end up turning their monolith into a distributed monolith. In doing so they don’t get the major benefits microservices are meant to provide, such as independent code deployment. They also lose the benefits of a monolith, such as less ancillary complexity. This situation is the norm and is evidenced by “deployment parties” where you can’t just deploy one microservice because 11 of them need to go to prod together.
What I have seen a lot of over the past few years is a push to get off mainframes and into the cloud. This is a valid driver for rearchitecting, but microservices are just one of a number of solutions, as the cloud is very flexible these days.
I assert that a lot of rearchitecting to microservices can be attributed to the fact that our industry, as Alan Kay has said, is a Cargo Cult.
I'm always sceptical of big claims for or against specific architectural and infrastructural choices.
Micro-services make sense in specific cases and don't in others, just as a monolith is an absolute no-go in some cases but a really good fit for others.
The correct choice is always the simplest for what you need, the tricky bit is understanding what you actually need. The right choice might be a complex solution because your needs require some complexity.
Going for micro-services just for the sake of it, without the need for them, is a bad choice, but that doesn't mean micro-services are bad.
Microservices - let’s replace as many interfaces as possible with the slowest, flakiest, most complex mechanism - the network layer. Why call a function when you can wrap that function in an entire application and call it via API? Why have a single database when we can silo our data across 200 mini databases? Why have a single repo when we can have 200 tiny repos?
I've found the nuance is in the middle somewhere. We've all seen the madness with web scale infrastructure for a personal blog, but one gigantic compilation unit will eventually bite you in the ass too (i.e. rebuilds get very slow).
What you probably want is something where everything lives in the same repository, but as separate modules/dlls which can be included in some common execution framework the team previously agreed upon.
If you have something approximating microservices-as-dlls, then you are essentially eating cake while having cake when you really think about it. Function calls are still direct (sometimes even inlined), but you could quickly take that same DLL and wrap it with a web server and put it on its own box if needed.
Establishing clear compilation unit boundaries without involving network calls is the best path for us, and I suspect it's the best path for anyone to start with. We take this "don't involve the network" philosophy into our persistence layer too. SQLite is much easier to manage compared to the alternatives.
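The "microservices-as-dlls" idea above can be sketched in a few lines: business logic behind a plain function call today, trivially wrappable by any web framework tomorrow, with SQLite keeping persistence off the network too. All names here are invented for illustration:

```python
# Sketch of a "microservice-as-module": the public surface is one direct,
# inlinable function call, not an HTTP endpoint.
import sqlite3

def make_store() -> sqlite3.Connection:
    conn = sqlite3.connect(":memory:")  # no network hop for persistence either
    conn.execute("CREATE TABLE notes (id INTEGER PRIMARY KEY, body TEXT)")
    return conn

def add_note(conn: sqlite3.Connection, body: str) -> int:
    """The module's whole API. Callers never see the schema or the SQL."""
    cur = conn.execute("INSERT INTO notes (body) VALUES (?)", (body,))
    conn.commit()
    return cur.lastrowid

# In-process use today:
db = make_store()
note_id = add_note(db, "hello")
# If this module ever needs its own box, an HTTP handler just parses the
# request body and calls add_note(); the function itself never changes.
```

The compilation-unit boundary is real (callers can only go through `add_note`), but there is no serialization, retry logic, or service discovery until the day you actually need them.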
Start with a monolith, that will take you VERY far.
When the organization gets big enough (AND ONLY THEN), add an additional domain oriented service. FULLY implement deployment and infra. Only once you do that can you think about adding another (using the pattern you just built out).
Micro-monoliths.
Organizations explode the number of services, half ass the infrastructure (the hard part of microservices), and then crumble under the organizational complexity.
We went from "services should be no more than 100 lines of code" to "testing and maintaining thousands of interconnected microservices is TERRIBLE IT TURNS OUT".
The secret here is that all simple answers are wrong. Your services are too small and too big at the same time. Finding balance is hard. Zen Buddhists call it the "Middle Way".
The best design is always an uneasy intersection of many approaches and concerns, and also it's the concept of what you decide NOT to do, so you have more resource to focus on what TO DO.
Also, we keep overanalyzing how we do services in isolation, when the complexity comes not from each of them alone but from how they interact. To solve this complexity you need clear, aligned flows. More like laminar flow, less like turbulence.
Trying to keep it simple (to a degree). First of all, I would say that it's OK to still have a monolith, and even to build a whole product as a monolith and break it down as the need for microservices arises.
My understanding of whether or not you should take a monolith and cut it into pieces is that it depends on what you want to achieve.
Every monolith is specific. Or is it?
Without knowing what your product does, I bet you have an API, a UI layer or two, some business logic, and maybe throw in emailing or a payment service. Well, guess what? We all have those!
How to decide.
For myself, I’ve tried to boil it down to 3 questions:
1. Will I need to scale this part of the monolith more (often) than others?
2. Does this part of the monolith handle an entire process on its own from start to finish?
3. Does this part of the monolith require very different code or resources from the other parts?
The questions are simple. They aren't philosophical. They don't have a hidden meaning. Rather, they're a series of simple booleans. If something needs to be a microservice, it'll most likely hit 3 out of 3 of those.
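Since they really are just booleans, the whole decision rule fits in one function. A toy sketch (the strict 3-out-of-3 threshold is the rule as stated above, not a universal law):

```python
# The three questions as the simple booleans they are. A candidate piece of
# the monolith that scores 3/3 is a strong extraction candidate.
def should_extract(scales_differently: bool,
                   owns_whole_process: bool,
                   needs_different_resources: bool) -> bool:
    return all((scales_differently, owns_whole_process, needs_different_resources))

print(should_extract(True, True, True))    # True: extract it
print(should_extract(True, False, True))   # False: probably leave it in place
```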
The discussion of Microservices vs. Monolith feels a lot like NoSQL vs. Relational one. That is to say, Microservices are a bad idea right up to the point where monoliths won't work.
Most services can be successfully implemented with monoliths (and relational DBs for that matter). Only when that solution doesn't scale anymore, that's when microservices come in handy. Particularly when a large service has core functionality that must always run and secondary functionality that can tolerate higher rates of failure.
I think the problem with microservices is the same as the problem with OO programming (and I say this as a pure-OO Rubyist): what you are doing is shifting the complexity out of your code, where at least it's under source control and (hopefully) readable, and moving it into the order and timing of the interactions between your services/objects, which isn't readable unless you start hunting through log files.
I've been on both sides of this debate. I've seen codebases from teams that want monorepos and macroservices that fit all of the following criteria:
- The codebase had three primary responsibilities
- None of those functions overlapped in functionality or shared any significant code
- They were all written by different people with subtly different styles
- They used infrastructure code for talking to third party services in subtly different ways that made upgrading dependencies difficult
In theory, they were all within the same business domain, so the types that think one business domain equals one service clumped them together. This made little sense.
On the opposite side, I've seen microservices where all the little services depended on one another in complicated ways that made them all a brittle mess.
Finding the right solution to each problem is always the real challenge.
Use the best tool for the job. It's stupid to think of monoliths vs microservices. You can use both if the problem requires it.
For example I'm currently working on an audio hosting service. The main app is a monolith where 90% of the code resides but there are a couple of ancillary services.
Audio encoding (which is heavily CPU bound) is a serverless microservice that can scale up and down as needed. Users don't upload content constantly, but when they do, you want to be able to encode stuff concurrently without blocking the main app. Audio streaming is also a serverless microservice because hey for every user uploading content you can have 1000x consumers (or more).
What a waste of time. Anyone who's not an architect or developer, but regularly works with architects and developers, intuitively reaches these conclusions. And this has been going on and on for almost ten years.
When you watch this from the outside (e.g., let's say, as a consultant called in to advise on a very specific aspect of software architecture) it feels like they all follow a secret agreement that instructs them to go the full microservices route. Questioning this, even as a consultant paid to do exactly that, is considered unacceptable. It's like questioning your client's religious or political beliefs.
I observe a similar trend with CIOs hired to help institutions digitally transform themselves. Many operate on "innovative" reasoning that consists of going full cloud and laying off IT personnel, which inevitably leads to increased operating and maintenance costs without exploiting the actual benefits of cloud computing. But by then it's already too late: Mr/Mrs CIO has already left the org when this happens. And self-congratulatory words are already published on their LinkedIn profile.
I often have two thoughts when I attend a pre-sales meeting with a prospective customer that shows us a beautiful microservices architecture:
1. Oh my...
2. Shut up and just take the money they are throwing at your face.
Having run k8s and _classical_ microservices before, I am now in a much happier place just using the AWS serverless suite (lambda, API GW, CF, SNS, SQS, eventbridge, dynamodb, etc).
Is my setup "microservices"? Well, maybe, depending on your definition, but, in truth I don't really care - it works pretty well.
We also do "DDD" with it and have multiple AWS accounts with these marking the domain borders. Comms between the accounts is via eventbridge or (very rarely) inter-account API invocation.
This allows many of the benefits of microservices, without the pain of dealing with k8s. Clean separation of domains, reduced cognitive loads for teams, each of which looks after all the stuff in a single account (so-called feature teams, where each team designs/manages and runs everything in that account/domain).
The hard bit was defining the domain borders, and the inter-domain protocols/interactions, but, once this is well defined, things work pretty well.
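Those inter-domain interactions over EventBridge boil down to publishing well-shaped events. A sketch of building one, shaped the way EventBridge's `put_events` expects its `Entries` (`Detail` must be a JSON string); the bus name, source, and payload are hypothetical, and the actual `boto3` send is left as a comment so the sketch stays runnable offline:

```python
# Build a cross-account domain event in the Entries shape EventBridge expects.
import json

def domain_event(source: str, detail_type: str, payload: dict, bus: str) -> dict:
    return {
        "Source": source,
        "DetailType": detail_type,
        "Detail": json.dumps(payload),  # EventBridge requires a JSON string here
        "EventBusName": bus,
    }

entry = domain_event(
    source="com.example.billing",
    detail_type="InvoicePaid",
    payload={"invoice_id": "inv-123", "amount_cents": 4200},
    bus="billing-to-fulfilment",  # the inter-domain border
)
# Sending, with boto3:
#   events = boto3.client("events")
#   events.put_events(Entries=[entry])
```

Keeping the event construction in one helper per domain is one way to make the inter-domain protocol explicit and easy to review.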
Having come from a k8s world, this setup feels so much nicer and lighter, and it's easier to get stuff built in a way that is both fast and performant.
It would be interesting to see trends away from _classical_ "cloud-native" (k8s) setups with microservices, to true serverless setups. I wonder how much of k8s's lunch serverless has managed to eat so far.
If you can't get your monolith right you probably won't get your microservices right. Microservices do come with additional overhead in terms of infrastructure, integrations, and so on. There's a concept of microservices readiness (e.g. https://learn.microsoft.com/en-us/azure/architecture/guide/t...). Many organisations aren't ready to embrace microservices, and if they get into microservices before they're ready, then it's a lot of pain. There's also this misconception that microservices must be nano-services. But that's not a problem with microservices architecture, it's a problem of using microservices anti-patterns. As with everything in technology, there's no universally unique solution to all problems - everything's context-specific.
[+] [-] sisve|3 years ago|reply
Context matters so much. Should you have absolutely everything in 1 system. No. And I think no one thinks that anymore.
Should you spilt your system into as many pieces as possible?
No, of course not.
You are prop. storing files one place and have a data in a database another place. And most likely none are on the webserver receiving request. Or maybe you are, because context matters and I'm wrong in your particular case. Could be.
My advice would be to split your application up into enough pices so that;
1.your engineers feels like they have control over the code and are not afraid of doing changes.
2. if you get uptime issues. Get the heavy / unrealiable services over to new node. Don't let one service/endpoint take down other endpoints.
Think and act according to the problems you have in front of you. Don't go extreme in any direction. Context context context
[+] [-] avip|3 years ago|reply
[+] [-] jonhohle|3 years ago|reply
At the same time, a few person startup should probably focus on what allows them to deliver the fastest. I’ve seen that work with relatively monolithic systems and with SOA, tooling choice makes a huge impact.
[+] [-] __MatrixMan__|3 years ago|reply
E > I want to reorganize this because it's ugly and hard to work with as it is
M > I want you focusing on this list of features/bugs and not introducing risk by making changes for other reasons
Something doesn't feel right about:
E > I'm the expert, you have to take my word that this change is necessary
So if depending on whether E wants to join or separate, they put on their I-hate/heart-microservices hat and rehash whichever side of the tired old debate serves them.
People who agree or disagree dust off their I-hate/heart-microservices hats, and we go to town for a while.
We're not actually arguing for micro vs macro, we're talking about the context specific details, we just dress it up in those terms because M is in this meeting and M isn't close enough to the code to keep up with a conversation about the details.
If enough energy is not expended in this process, whoever lost the debate goes and writes a blog about why they were right. Except nobody will read a blog about their specific codebase, so it ends up being about whether they heart/hate microservices.
[+] [-] Zaskoda|3 years ago|reply
My Boss: "I'm convinced this is the right architecture."
Also my boss: "Now how do we break this app up?"
He was certain we needed to shatter our monolith into lots of little pieces, but had no clear vision as to what those pieces would be. From my point of view, the "architecture" he was certain of wasn't an architecture at all. It was just a general notion of doing what he thought everyone else was doing without considering anything about our app. Total cargo cult mentality.
[+] [-] Stranger43|3 years ago|reply
"engineers feels like they have control over the code and are not afraid of doing changes."
And isn't it the case that making that a goal in itself only fragments the project into a messy ball of micro-components (services/classes/libraries) with interdependencies so complex that nobody feels in control of the code, and everybody is afraid of making changes?
It's almost as if there is no real solution to complexity that avoids having to deal directly with the complexity inherent to a problem domain, and that means developing documentation and testing that allows people to touch/change scary systems.
[+] [-] tetha|3 years ago|reply
Due to acquisitions, we have 6 or 7 big messy monoliths and, actually, 3-5 infrastructure stacks. This in turn means we have something like 12 different user management systems (because each monolith contains a few layers of legacy user management; it'd be boring otherwise), 4-5 different file and image stores (some products just don't have one), and 4-5 different search implementations. It's a bit of a messy zoo.
Our path out of that mess is to either extract, or re-implement functionality common to the products in smaller scale, standalone services - for example so you end up with a centralized user management, or a centralized attachment system. And these things certainly aren't micro, user management does a bit of search, a bit of SAML, a lot of OIDC, some CRUD for users, groups and such. If you wanted to be silly, this could be 5-6 "micro-services".
But realistically, why? We do gain advantages by having this in a smaller service: we can place a bit of extremely critical functionality in a small service and manage that service very, very gently. And we can reduce redundant effort, even if migrating to this service takes some work. But what would some dedicated SAML-integration-spring-app improve, besides moving 1-2 tables into another database/schema?
[+] [-] JAlexoid|3 years ago|reply
Yes, you should build your system initially like that. One team builds one system and splits it out as necessary.
[+] [-] chucksta|3 years ago|reply
Well, that point just makes 90% of web articles moot.
[+] [-] tasuki|3 years ago|reply
I might think that, depending on the context.
Are you a multinational with thousands of monkeys at thousands of typewriters? Of course you shouldn't have everything in one system!
Are you a five person startup, out of which only three write code? Knowing nothing else, I'd suggest everything in one system.
[+] [-] SergeAx|3 years ago|reply
As long as you are physically able to do that: absolutely yes.
[+] [-] UltraViolence|3 years ago|reply
Everyone ran with it and now we're stuck with thousands of fragile and unmaintainable systems which will ruin companies in the coming decades.
[+] [-] hbrn|3 years ago|reply
> Don't go extreme in any direction. Context context context
That is exactly why debate occurs. All religions, be it microservices or TDD, are taking some good ideas to an extreme and removing context from the decision making process.
Otherwise the term "microservices" would never have been invented: people have been doing "services" since the dawn of time. But you can't create hype out of common sense; you have to go to the extremes.
[+] [-] jredwards|3 years ago|reply
> 1.your engineers feels like they have control over the code and are not afraid of doing changes.
I'd add: split your team up into a similar number of pieces. Your application will come to resemble the shape of your teams, over time. Design your teams based on where you want the boundaries of your application to fall.
[+] [-] yowlingcat|3 years ago|reply
a) engineers are afraid of or find making changes difficult b) uptime issues result in heavy endpoints/services taking down others
To your point, microservices should be introduced expressly to solve those two problems, and not before.
[+] [-] pantsforbirds|3 years ago|reply
We had a decent-sized data pipeline that was entirely microservice and serverless, and it was a real joy to work with.
- Our NLP code lived in a service that ran on GPUs
- Our ingest service used a high-RAM but relatively simple CPU service to do in-memory joins cheaply and efficiently
- We had a bunch of specialized query services that were just direct requests to AWS services, or a light lambda wrapper around a call to a service
Coordinated things with airflow and it was very easy to maintain and scaling was pretty efficient since we just scaled the pieces that needed it without wasting money on unneeded compute.
[+] [-] ch_sm|3 years ago|reply
I don't really understand that argument. I don't feel any safer making a change to a microservice inside a large system than making a change to a monolith; the consequences of a mistake are equal in both cases (although harder to observe and debug in a microservice architecture). Am I missing something?
[+] [-] herdcall|3 years ago|reply
[+] [-] hot_gril|3 years ago|reply
[+] [-] naasking|3 years ago|reply
[+] [-] jerf|3 years ago|reply
Microservices are not a miracle cure or the solution to every problem, but they do force divisions within the code base. Every microservice defines an interface for its input and its output. It may be the sloppiest, crappiest definition ever, with dynamic types and ill-defined APIs and bizarre side effects, but it is some sort of definition, and that means if necessary, the entire microservice could be entirely replaced with some new chunk of code without affecting the rest of the system, cleanly cut along the API lines. This microservice may sloppily call dozens of others, but that can be seen and replicated. It may be called by a sloppy combination of other services, but the incoming API could be replicated.
However bad the architecture of the microservice may be, however bad the overall architecture of the microservice-based system as a whole may be, this will be true by structural necessity. The network layer defines some sort of module.
They can create big balls of spaghetti, certainly. They can in total create a big mess; they are not architectural magic by any means. While a full replacement of a given microservice is practical and possible, if the boundaries are not drawn correctly to start with, fixing that can be much harder in a microservice architecture (with its corresponding separation of teams) than in a monolith.
But they fail to create the situation that is what I would consider the distinguishing characteristic of a "Big Ball of Mud", where there are no partitions between anything at all. Big Balls of Mud have no equivalent of "replace this microservice". Microservices by necessity, to be "microservices", have partitions.
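jerf's point that even a sloppy service boundary is still a replaceable interface can be sketched in a few lines of Python (all names here are hypothetical, and an in-process `Protocol` stands in for the network API):

```python
from typing import Protocol

class UserStore(Protocol):
    """The implicit contract a service exposes: whatever lives behind
    it can be swapped wholesale without touching any caller."""
    def get_email(self, user_id: int) -> str: ...

class LegacyUserService:
    """The sloppy original implementation."""
    def get_email(self, user_id: int) -> str:
        return {1: "a@example.com"}.get(user_id, "unknown")

class RewrittenUserService:
    """A full replacement, cut cleanly along the same API lines."""
    def get_email(self, user_id: int) -> str:
        emails = {1: "a@example.com"}
        return emails.get(user_id, "unknown")

def notify(store: UserStore, user_id: int) -> str:
    # Callers depend only on the interface, never the implementation.
    return f"mail sent to {store.get_email(user_id)}"
```

Either implementation satisfies `notify`; replacing one with the other is invisible to the rest of the system, which is the "partition" the comment is describing.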
[+] [-] esailija|3 years ago|reply
Funny you say that, since a microservice API is always dynamically typed and its usage cannot be checked by a compiler. The more microservices you use, the more dynamically typed the whole project gets overall.
While you can opt into a free-for-all of everything importing everything, all languages also support creating modules which define APIs for consumers and maintain compile-time type checking.
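esailija's contrast can be made concrete with a hypothetical Python sketch, where a static checker such as mypy plays the compiler's role: the in-process call is verifiable at every call site, while the over-the-wire payload is stringly typed until runtime:

```python
import json

# In-process module API: a type checker can verify every call site.
def reserve_stock(item_id: int, quantity: int) -> bool:
    return quantity > 0

# Over-the-wire API: the payload arrives as a string; nothing verifies
# that fields exist or have the right types before the code runs.
def reserve_stock_remote(payload: str) -> bool:
    data = json.loads(payload)   # effectively dict[str, Any] from here on
    return data["quantity"] > 0  # a mistyped key here fails only at runtime

ok = reserve_stock(42, 3)                                        # statically checked
ok_remote = reserve_stock_remote('{"item_id": 42, "quantity": 3}')  # checked never
```

The second function works, but a renamed field in the producing service breaks it with no warning at build time, which is the sense in which a project gets "more dynamically typed" per service boundary.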
[+] [-] naasking|3 years ago|reply
This is a bit of a strawman. The equivalent strawman criticism of microservices is that every function runs in its own networked service. Is that truly representative of the reality? Of course not, and neither is your breakdown of the big ball of mud.
I think this is a fair analysis of the BBOM:
http://www.laputan.org/mud/mud.html
[+] [-] nprateem|3 years ago|reply
Still, crap devs will just find their own ways to mess up with microservices, but at least they limit the blast radius.
[+] [-] dagss|3 years ago|reply
"The entire point of the big ball of mud is that there are no meaningful divisions in the code.... Everything uses everything willy-nilly....not a single line of code in the system that you can change without fear of bringing something else down that you may not have even known they existed"
...then OK, perhaps the developers who caused the state you describe above would not cause the exact same problems with microservices -- but will they really move fast and not cause a mess given ANY kind of environment?
The state you describe is not typical of monoliths by any stretch.
It may be typical of old legacy systems with 5 generations of programmers on them. I also believe microservices will have problems of their own kind, but still deep problems, after 5 generations of programmers.
If preventing people from running in the completely wrong direction of the goal is your main concern -- why even be in the race at that point. Find new people to work with.
If you personally had success rewriting a Ball of Mud into microservices, consider whether "rewrite" is the important word (along with the quality of the developers involved), not whether the refactor was to a new monolith or to new microservices.
Microservices with boundaries drawn wrong can force you to spend 20 programmers doing the job of 1. Perhaps the mud looks different from a Big Ball, but it is still mud.
[+] [-] jasonhansel|3 years ago|reply
Therein lies the problem. Nobody draws these boundaries correctly on the first try, and the correct boundaries can shift rapidly over time as new features are added or requirements change.
[+] [-] drewcoo|3 years ago|reply
Do they? The code itself may be in entirely separate repos but still be tightly coupled. Monoliths can have cleanly separated libraries/modules, those modules built from separate repos or at the very least, different namespaces.
The "macroservices" I've been seeing are many separate containers all sharing at least one data store. So they have all of the disadvantages of the "ball of mud" monolith combined with all of the disadvantages of much more complicated infrastructure. Yet the people working on them think they're "doing microservices" because k8s!
The microservice separation is not just code in separate repos. It's also everything else behind the kimono - keep that kimono clasped tightly!
[+] [-] randomdata|3 years ago|reply
Any time you lean on code in a random GitHub repository, where you never speak to the author and just use the API you're given, you're doing microservices. This works well enough so long as the product does what you need of it.
The problem is that when the product doesn't do what you need. If the microservices teams are under the same organization umbrella there is a strong inclination to start talking to other teams instead of building what's needed in house, which violates the only communicate by well defined APIs. This is where the ball of mud enters.
If your organization is such that you can call up someone on another team, you don't need microservices. They're for places so big that it is impossible to keep track of who is who and your coworkers may as well be some random GitHub repository.
[+] [-] dmak|3 years ago|reply
No. That's not what a microservice is.
I understand you are trying to draw analogies, but a library is not considered a microservice.
[+] [-] quonn|3 years ago|reply
So a library is a microservice now?
[+] [-] JAlexoid|3 years ago|reply
This substitution is what is commonly known as a Strawman argument. You misrepresented an argument, to discredit it easier.
[+] [-] gryn|3 years ago|reply
[+] [-] nxpnsv|3 years ago|reply
[+] [-] PaulHoule|3 years ago|reply
There is a certain kind of "freedom" that is really slavery, but people feel so free when they hear about it that they often squee and hurt themselves with uncontrolled movements.
Microservices can be that way. Now that you have 25 different services in 25 different address spaces you can write them in 11 different languages and even use 4 versions of Python and 3 versions of Java. (I got driven nuts years ago in a system that had some Java 6 processes and some Java 7 processes and it turned out the XML serialization worked very differently in those versions.)
If you want to be productive with microservices you have to do the opposite: you have to standardize build, deployment, configuration, serialization, logging, and many "little" things that are essential but secondary to the application. If a coder working on service #17 has to learn a large number of details to write correct code they are always going to be complaining they are dealing with a "large ball of mud". If those little things are standardized you can jump to service #3 or #7 and not have it be a research project to figure out "how do i log a message?"
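A minimal sketch of the standardization PaulHoule describes, assuming a hypothetical shared helper that every service imports, so "how do I log a message?" has the same answer in service #3, #7, and #17:

```python
import json
import logging
import sys

def service_logger(service: str) -> logging.Logger:
    """Hypothetical org-wide helper: every service gets structured JSON
    logs on stdout, configured identically, with no per-service setup."""
    logger = logging.getLogger(service)
    if not logger.handlers:  # configure once, even if called repeatedly
        handler = logging.StreamHandler(sys.stdout)
        handler.setFormatter(logging.Formatter(
            json.dumps({"ts": "%(asctime)s", "svc": service,
                        "level": "%(levelname)s", "msg": "%(message)s"})))
        logger.addHandler(handler)
        logger.setLevel(logging.INFO)
    return logger

log = service_logger("billing")
log.info("invoice created")
```

The point is not this particular format but that the decision is made once, in one place, instead of being a research project in each of 25 repos.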
[+] [-] jameshart|3 years ago|reply
I bet when you’re driving, you also tend to notice that everyone else is either an idiot driving way too slow in the middle lane or a maniac speeding past you. Nobody else drives as well as you do.
It must be exhausting for these people to live in a world surrounded by strawmen, while they alone have achieved perfection. Which they will detail how they do in a later blogpost.
[+] [-] mainguy|3 years ago|reply
[+] [-] 0x445442|3 years ago|reply
I think most places rearchitect to microservices because it’s the new shiny. They don’t do the engineering necessary to create a detailed cost/benefit analysis, they just feel it will be better and so they jump in.
For the same reasons the companies don’t do the cost/benefit analysis they don’t spend much time thinking about how they could benefit from rearchitecting their monolith into various libraries, modules, packages and interfaces.
Because they don’t think much about these code boundaries, they end up turning their monolith into a distributed monolith. In doing so they don’t get the major benefits microservices are meant to provide, such as independent code deployment. They also lose the benefits of a monolith, such as less ancillary complexity. This situation is the norm and is evidenced by “deployment parties” where you can’t just deploy one microservice because 11 of them need to go to prod together.
What I have seen a lot of over the past few years is a push to get off main frames and into the cloud. This is a valid driver for rearchitecting but microservices are just one of a number of solutions as the cloud is very flexible these days.
I assert that a lot of rearchitecting to microservices can be attributed to the fact that our industry, as Alan Kay has said, is a Cargo Cult.
[+] [-] boudin|3 years ago|reply
Micro-services make sense in some cases and don't in others, just as a monolith is an absolute no-go in some cases but a really good fit in others.
The correct choice is always the simplest for what you need, the tricky bit is understanding what you actually need. The right choice might be a complex solution because your needs require some complexity.
Going for micro-services just for the sake of it, without the need for it is a bad choice, but it doesn't mean that micro-services are bad.
[+] [-] monero-xmr|3 years ago|reply
[+] [-] bob1029|3 years ago|reply
What you probably want is something where everything lives in the same repository, but as separate modules/dlls which can be included in some common execution framework the team previously agreed upon.
If you have something approximating microservices-as-dlls, then you are essentially eating cake while having cake when you really think about it. Function calls are still direct (sometimes even inlined), but you could quickly take that same DLL and wrap it with a web server and put it on its own box if needed.
Establishing clear compilation unit boundaries without involving network calls is the best path for us, and I suspect it's the best path for anyone to start with. We take this "don't involve the network" philosophy into our persistence layer too. SQLite is much easier to manage compared to the alternatives.
[+] [-] John23832|3 years ago|reply
When the organization gets big enough (AND ONLY THEN), add an additional domain-oriented service. FULLY implement deployment and infra. Only once you do that can you think about adding another (using the pattern you just built out).
Micro-monoliths.
Organizations explode the number of services, half-ass the infrastructure (the hard part of microservices), and then crumble under the organizational complexity.
[+] [-] BulgarianIdiot|3 years ago|reply
We went from "services should be no more than 100 lines of code" to "testing and maintaining thousands of interconnected microservices is TERRIBLE IT TURNS OUT".
The secret here is that all simple answers are wrong. Your services are too small and too big at the same time. Finding balance is hard. Zen Buddhists call it the "Middle Way".
The best design is always an uneasy intersection of many approaches and concerns, and also it's the concept of what you decide NOT to do, so you have more resource to focus on what TO DO.
Also we keep overanalyzing how we do services in isolation, when the complexity comes not from each of them alone, but how they interact. To solve this complexity you need clear, aligned flows. More like laminar flow. Less like turbulence.
[+] [-] DavorDK|3 years ago|reply
My understanding of whether or not you should take a monolith and cut it into pieces is that it depends on what you want to achieve.
Every monolith is specific, or is it? Without knowing what your product does, I bet you have an API, a UI layer or two, some business logic, and maybe throw in emailing or a payment service. Well, guess what? We all have those!
How to decide. For myself, I’ve tried to boil it down to 3 questions:
1. Will I need to scale this part of the monolith more (often) than others?
2. Does this part of the monolith handle an entire process on its own, from start to finish?
3. Does this part of the monolith require very different code or resources from the other parts?
The questions are simple. They aren't philosophical. They don’t have a hidden meaning. Rather, a series of simple booleans. If something needs to be a microservice it'll most likely hit 3 out of 3 of those.
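Read literally, the three questions reduce to a conjunction; a toy helper (hypothetical, just to make the 3-out-of-3 rule explicit):

```python
def should_be_a_service(scales_differently: bool,
                        owns_whole_process: bool,
                        needs_different_resources: bool) -> bool:
    """Split a piece out only when it hits 3 out of 3 of the
    questions above."""
    return all([scales_differently, owns_whole_process,
                needs_different_resources])

# A CPU-heavy media encoder that scales on its own, handles encoding
# end to end, and needs hardware the rest of the app doesn't:
should_be_a_service(True, True, True)    # split it out
# A reporting page that merely looks separable:
should_be_a_service(True, False, False)  # keep it in the monolith
```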
[+] [-] athenot|3 years ago|reply
Most services can be successfully implemented as monoliths (and with relational DBs, for that matter). Only when that solution no longer scales do microservices come in handy, particularly when a large service has core functionality that must always run and secondary functionality that can tolerate higher rates of failure.
[+] [-] rahoulb|3 years ago|reply
[+] [-] _ea1k|3 years ago|reply
- The codebase had three primary responsibilities
- None of those functions overlapped in functionality or shared any significant code
- They were all written by different people with subtly different styles
- They used infrastructure code for talking to third party services in subtly different ways that made upgrading dependencies difficult
In theory, they were all within the same business domain, so the types that think one business domain equals one service clumped them together. This made little sense.
On the opposite side, I've seen microservices where all the little services depended on one another in complicated ways that made them all a brittle mess.
Finding the right solution to each problem is always the real challenge.
[+] [-] pier25|3 years ago|reply
For example I'm currently working on an audio hosting service. The main app is a monolith where 90% of the code resides but there are a couple of ancillary services.
Audio encoding (which is heavily CPU bound) is a serverless microservice that can scale up and down as needed. Users don't upload content constantly, but when they do, you want to be able to encode stuff concurrently without blocking the main app. Audio streaming is also a serverless microservice because hey for every user uploading content you can have 1000x consumers (or more).
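The upload/encode split can be sketched with a plain in-process queue standing in for the serverless encoder (hypothetical sketch; in production each job would get its own function instance rather than a thread):

```python
import queue
import threading

jobs: "queue.Queue" = queue.Queue()
encoded = []

def encode_worker():
    """Stand-in for the serverless encoder: drains the queue so that
    uploads never block the main app, however long encoding takes."""
    while True:
        path = jobs.get()
        if path is None:  # sentinel: shut the worker down
            break
        encoded.append(path + ".opus")  # pretend transcode
        jobs.task_done()

t = threading.Thread(target=encode_worker, daemon=True)
t.start()
jobs.put("episode-1.wav")  # main app hands off and returns immediately
jobs.put(None)
t.join()
```

The monolith only ever enqueues; how many encoders drain the queue (one thread here, N lambda instances in production) is invisible to it, which is what lets the CPU-bound part scale independently.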
[+] [-] nokya|3 years ago|reply
When you watch this from the outside (e.g., let's say, as a consultant called to advise on a very specific aspect of software architectures) it feels like they all follow a secret agreement that instructs them to go full microservices route. Questioning this, even as a consultant paid to do exactly that, is considered unacceptable. It's like questioning your client's religious or political beliefs.
I observe a similar trend with CIOs hired to help institutions digitally transform themselves. Many operate on an "innovative" reasoning that consists of going full cloud and laying off IT personnel, which inevitably leads to increased operating and maintenance costs without exploiting the actual benefits of cloud computing. But by then it's already too late: Mr/Mrs CIO has already left the org when this happens, and self-congratulating words are already published on their LinkedIn profile.
I often have two thoughts when I attend a pre-sales meeting with a prospect customer that shows us a beautiful microservices architecture:
1. Oh my...
2. Shut up and just take the money they are throwing at your face.
[+] [-] bradwood|3 years ago|reply
Is my setup "microservices"? Well, maybe, depending on your definition, but, in truth I don't really care - it works pretty well.
We also do "DDD" with it and have multiple AWS accounts with these marking the domain borders. Comms between the accounts is via eventbridge or (very rarely) inter-account API invocation.
This allows many of the benefits of microservices, without the pain of dealing with k8s. Clean separation of domains, reduced cognitive loads for teams, each of which looks after all the stuff in a single account (so-called feature teams, where each team designs/manages and runs everything in that account/domain).
The hard bit was defining the domain borders and the inter-domain protocols/interactions, but once those are well defined, things work pretty well.
Having come from a k8s world, this setup feels so much nicer and lighter, and it's easier to get stuff built in a way that is both fast and performant.
It would be interesting to see trends away from _classical_ "cloud-native" (k8s) setups with microservices, to true serverless setups. I wonder how much of k8s's lunch serverless has managed to eat so far.
[+] [-] abunuwas|3 years ago|reply