top | item 25498079

I've been merging microservices back into the monolith

282 points | rmason | 5 years ago | bennadel.com | reply

196 comments

[+] TeeWEE|5 years ago|reply
It's not about microservices or not, it's about domain boundaries, application architecture, coupling. Whether it runs in a service or in a monolith isn't the deciding factor of whether it's good or bad.

Architecture and design are. An app with 100 services of 5,000 lines of code each could be really badly designed and entangled, and a monolith with 500,000 lines of code could be very well designed...

It's about code quality, the right abstractions, decoupling, cohesion, correct domain modelling, etc.

Microservices are just a tool, but you have to have a good design first.

[+] eternalban|5 years ago|reply
From a technical point of view, we're on the same page. But then there is the reality of modern software development.

A few years ago, the new CTO of a Union Square Ventures investment tapped me as a technical advisor to review this startup's efforts to expand their software to new verticals. He came in as part of a reshuffle. He had brought his Director of Engineering from his previous (well known) company, and there was even a Star CEO (which is the reason I signed up!) with a monumentally impressive CV. As compensation, I was billing lawyer hourly rates.

After two months of reviewing their existing architecture and picking the brains of the dev, support, and sales teams about the new verticals, I came up with an architectural solution to their immediate and long-term needs. I gave them exactly what they said they wanted.

What then happened is that, for one, the Star CEO turned out to be an echo chamber of "growth growth growth" and entirely disappointing as a leader of a software company. And this was an SV veteran from both the HW and SW sides. Huge disappointment.

The CTO, who was entirely clueless, apparently gave in to back channel efforts of his DoE to disregard my work product. A few days later I walk into an all hands meeting where the "next generation" was announced to be "a ball of mud" of "microservices". This same CTO later took me aside and basically offered to give me hush money ("will you be my personal advisor?") to keep quiet, while getting the same (insanely high) hourly rate for the remainder of my contract.

So this is the reality of modern software development. Architecture is simply not valued at the extremities of tech organizations: the bottom ranks are young developers who simply don't know enough to appreciate thoughtful design. And the management is facing incentives that simply do NOT align with thoughtful design and development.

[+] globular-toast|5 years ago|reply
Yeah. One thing I've noticed is that people are really bad at working towards high-level, nebulous goals like "code quality, right abstractions, decoupling, cohesion". Keeping these goals in mind requires constant thought, analysis and decision making as each case is different and there's no universal solution.

So people jump at simple, single-faceted goals like "normalise the database", "spin out microservices", "don't use HTML tables", "rewrite using Rust" etc. These involve no further thought, no continuous evaluation of nebulous targets like quality, no keeping up with moving goals.

It explains quite a lot of problems with politics too. Choose from red or blue. No further thinking required. If even that's too difficult, just pick the one your parents/friends picked. Easy!

[+] pjmlp|5 years ago|reply
Indeed, if the team is not able to write modular code, putting a network in the middle will not sort it out; the outcome is just spaghetti network calls.
[+] fatnoah|5 years ago|reply
>it's about domain boundaries, application architecture, coupling.

Thank you! As a service consumer, if I need to call the same 3 other services for EVERY OPERATION, then we've achieved microservice hell.

>a monolith with 500,000 lines of code could be very well designed...

As someone who was part of a "de-monolith" effort, the first step of our process was to create internal service facades that grouped things according to the logical domains that aligned well with our required functionality, data storage, and availability requirements.

Once those were functional, decoupled, and comprehensible, we actually had an easy-to-maintain, clean monolithic code base that exposed a small handful of loosely coupled services conforming to the workflows we needed to support via web services.

[+] collyw|5 years ago|reply
This is something that is totally underappreciated in software. I guess it's difficult to do and takes experience to get right. And probably more difficult to explain how to do to others.

Having worked on enough crap code in the past (and right now), my primary thought is to try to make things understandable for the next dev that needs to pick it up after me.

[+] Animats|5 years ago|reply
A real issue with microservices is that you can no longer use the database system for transactional integrity. Microservices are fine for things you never have to back out. But if something can go wrong that requires a clean cancel of an entire transaction, now you need all the logic for that in the microservice calls. Which is very hard to get right for the cases where either side fails.

(Today's interaction with phone support: Convincing Smart and Final that when their third-party delivery company had their systems go down, they didn't actually deliver the order, even though they generated a receipt for it.)

[+] insertnickname|5 years ago|reply
Jimmy Bogard's talk "Six Little Lines of Fail"[0] is an interesting case study of all the ways things get so much more complicated when you can't just roll back a transaction if something goes wrong.

[0]: https://youtu.be/VvUdvte1V3s

[+] gizzlon|5 years ago|reply
Yeah, and while things like the Saga pattern[1] are interesting, it hasn't impressed me with its elegance or cleverness :[ (also, there seem to be many real-life cases where it would not work)

1: https://www.youtube.com/watch?v=xDuwrtwYHu8
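For readers unfamiliar with it, the core of the Saga pattern is compensation: each completed step has an undo action that runs if a later step fails. A minimal sketch, with all names hypothetical (not from either linked talk):

```python
# Saga-style compensation: each step pairs an action with a compensating
# action; a failure part-way through runs the compensations for the
# already-completed steps in reverse order.

class SagaFailed(Exception):
    pass

def run_saga(steps):
    """steps: list of (action, compensate) pairs of zero-arg callables."""
    done = []
    for action, compensate in steps:
        try:
            action()
            done.append(compensate)
        except Exception as exc:
            for undo in reversed(done):   # best-effort rollback
                undo()
            raise SagaFailed(f"step failed: {exc}") from exc

# Toy run: the "payment" step fails, so the reserved stock is released.
log = []

def fail_payment():
    raise RuntimeError("payment declined")

steps = [
    (lambda: log.append("reserve stock"), lambda: log.append("release stock")),
    (fail_payment, lambda: log.append("refund payment")),
]
try:
    run_saga(steps)
except SagaFailed:
    pass
print(log)  # ['reserve stock', 'release stock']
```

The awkward part, as the comments above note, is exactly what this sketch glosses over: compensations themselves can fail, and they only approximate a real database rollback.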

[+] bob1029|5 years ago|reply
Monolithic software maintained in a single computer/repository is so much better than software spread across many computers and repositories. There are some first-order arguments against a monolithic codebase, but the higher-order points will win out every time. I.e. the reason you want a separate codebase for each "concern" is probably because no one figured out how to work together as a team toward a common objective. This lack of teamwork will ultimately kill your project if you do not resolve it, so whether or not you do microservices in the expected way (i.e. HTTP RPC) will make no difference either way.

If you still want "micro" services because it actually makes technological sense, you can build these at a lower level of abstraction under some /Services folder. Many languages have namespaces, which allow for boundless organizational schemes. Only for a lack of imagination would you fail to organize even the most complex of applications in this manner.

[+] phodge|5 years ago|reply
"Monolith or Microservices" and "Monorepo vs Polyrepo" are best treated as two entirely separate engineering questions. You can have a Monolith built from code in many repos, or many microservices maintained in one big repo.

There are various pros+cons for whichever combination you choose, there's no "best" answer that applies to every single situation.

[+] jlouis|5 years ago|reply
Microservices tend to provide good isolation barriers in systems and those tend to increase stability of highly complex systems.

Though Erlang is a counterexample.

[+] paperpunk|5 years ago|reply
Out of curiosity, what approaches do you use to manage zero-downtime deployments during system operation with a monolith?
[+] slad|5 years ago|reply
I have worked at several companies with SOA. In fact, I was lead at one of the companies where we were breaking a monolith into smaller services. We were having lots of issues with scalability with the monolith. First we tried to scale it horizontally by creating shards and routing users to different shards. That helped, but with the growth we were seeing, we were back to scaling issues in 14 months. We broke it further into 4 shards and started working on SOA. After a year and a half we had dozens of smaller services, and scaling was a much smaller problem as it boiled down to scaling a specific service. Overall, a few points in favor of SOA that I didn't see elsewhere in the thread:

- smaller blast radius: every change is small and specific to a service, so it's easy to roll back and understand the impact

- load tests: capacity management was relatively easy; small services with small dependencies

- easier to update dependencies: Java version updates were not a huge project with every feature development on hold

- autonomy: teams had more autonomy as changes didn't require committee approvals

- prolific design patterns: services could use different architectural patterns

This obviously came with a lot of other issues - latencies, cross-service atomicity, log correlation. But in the end I believe the pros outweigh the cons, and I would continue to use the SOA pattern wherever I could.

The industry has been trending towards microservices/lambdas, which in my opinion take it too far. Finding the balance between a monolith and microservices is what works.
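The shard routing described at the start of this comment is, in its simplest form, a stable mapping from user id to shard. A sketch with assumed details:

```python
# Route each user to one of N shards via a stable hash, so the same
# user always lands on the same shard across processes and restarts.

import hashlib

SHARDS = ["shard-0", "shard-1", "shard-2", "shard-3"]

def shard_for(user_id: str) -> str:
    # md5 rather than Python's hash() so the mapping survives restarts
    # (hash() is salted per-process by default).
    digest = hashlib.md5(user_id.encode()).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]

print(shard_for("alice"), shard_for("bob"))
```

One caveat worth noting: plain modulo routing remaps most users when the shard count changes, which makes a resharding step like the one described above painful; consistent hashing is the usual way to limit that movement.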

[+] vkazanov|5 years ago|reply
I work at a company that uses a hybrid monolith/service approach to serve millions and millions and millions of users. Unless you work for one of the top-top-top social networks, we probably serve many more users than your systems do.

And even our existing services are not "micro": they are relatively large, and were mostly extracted to handle tasks that needed a different language.

This massive code base is a very good example of how most people can do just fine with a monolith. I even learned to appreciate this uniform approach to things: monorepo, unified ci/cd process, shared responsibility.

[+] tannhaeuser|5 years ago|reply
It would appear to make sense to separate two things:

- a service as isolated business logic with clean requirements process, ownership, SLAs, interfaces, testing, and build infra for maintenance

- a service as an endpoint you can access from most any environment (other services in whatever language, web apps)

The trick is to keep these two things apart and assign services to physical/virtual nodes/pods/whatever as late as possible, rather than making deployment decisions through choosing implementation techniques. E.g. it's not reasonable to expect scalability by deploying individual services to a large number of nodes with excessive granularity of services; having the option to deploy a called service on the same host as the calling service, so there is essentially no network overhead, might make more sense. It's also not reasonable to attempt to scale out services to a large number of nodes when your bottleneck is an RDBMS or other storage.

This was already very clear with 2nd gen SOA architectures like SCA (service component architecture) around 2007 or so, with options for binding implementations to remote protocols (SOAP) or locally via procedure calls, or both at the same time. This separation is notably absent from microservice architectures which always want to produce a pod or vm image as result artifact.

Now SOAP (and SCA and other SOA frameworks) also allowed transactions and auth context propagation; something that isn't even on the radar of microservice-like approaches. The (many) ones I saw at customers at least only naively implemented the happy path, not allowing for two-phase commit or at least compensation services to be called on abort by an aggregating service.
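The late-binding idea above can be sketched as one service interface with two transports, chosen by deployment configuration rather than baked into the implementation (all names here are hypothetical):

```python
# One interface, two bindings: the caller doesn't know or care whether
# the service is co-located in-process or reached over the network.

from typing import Protocol

class PricingService(Protocol):
    def quote(self, sku: str) -> int: ...

class LocalPricing:
    """In-process binding: a plain method call, no network overhead."""
    def quote(self, sku: str) -> int:
        return {"sku-1": 500}.get(sku, 0)

class RemotePricing:
    """Remote binding: same interface, but the call goes over the wire."""
    def __init__(self, base_url: str):
        self.base_url = base_url
    def quote(self, sku: str) -> int:
        # An HTTP GET to f"{self.base_url}/quote/{sku}" would go here;
        # the transport is elided in this sketch.
        raise NotImplementedError

def make_pricing(cfg: dict) -> PricingService:
    """Deployment config, not implementation, picks the binding."""
    if cfg.get("binding") == "remote":
        return RemotePricing(cfg["url"])
    return LocalPricing()

svc = make_pricing({"binding": "local"})
print(svc.quote("sku-1"))  # 500
```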

[+] bertjk|5 years ago|reply
In your opinion what is the difference between an SOA and microservice architecture? When does a microservice become just an SOA service or vice versa?
[+] konschubert|5 years ago|reply
> - prolific design patterns: services could use different architectural patterns

Very true.

But there is still a lot of architecture that crosses microservice boundaries: How the domain is split, the dependency hierarchy (or lack thereof) between services, data lifecycles, ...

It's important not to forget that.

[+] dpix|5 years ago|reply
This is a great article - one thing I always advocate for is that you don't need to go "all-in" on microservices. There is nothing wrong with having a main monolithic application with a few parts separated out into microservices where necessary for performance or team management.

The author hits the nail on the head at the end:

  If I could go back and redo our early microservice attempts, I would 100% start by focusing on all the "CPU bound" functionality first: image processing and resizing, thumbnail generation, PDF exporting, PDF importing, file versioning with rdiff, ZIP archive generation. I would have broken teams out along those boundaries, and have them create "pure" services that dealt with nothing but Inputs and Outputs (ie, no "integration databases", no "shared file systems") such that every other service could consume them while maintaining loose-coupling.
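As a concrete illustration of a "pure" Inputs-and-Outputs service in the quote's sense, take ZIP archive generation from the list above. This is only a sketch, not the author's actual code: the whole contract is bytes in, bytes out, with no integration database or shared file system.

```python
# A "pure" service boundary: a mapping of file names to contents goes
# in, archive bytes come out. Any caller can consume this while staying
# loosely coupled, because there is no shared state behind it.

import io
import zipfile

def build_zip(files: dict[str, bytes]) -> bytes:
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        for name, data in files.items():
            zf.writestr(name, data)
    return buf.getvalue()

archive = build_zip({"readme.txt": b"hello", "data.csv": b"a,b\n1,2\n"})
names = zipfile.ZipFile(io.BytesIO(archive)).namelist()
print(names)  # ['readme.txt', 'data.csv']
```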
[+] axegon_|5 years ago|reply
"Future thinking is future trashing".

Though I've had mixed feelings about this quote, in the case of the author it is probably valid. Yes, microservices are not a bad idea. They are in fact a great idea, and they are SOMETIMES needed. But there is a time and place for everything. They are unnecessary, completely counterproductive, and cumbersome unless you have a very large and complex system which needs to be distributed. If that is the case, of course, godspeed. But if you can work without them you should not go for a microservice architecture: let's face it, they are hard to design and develop, and in many cases a nightmare to debug when something goes wrong. Annoyingly, the term "microservice" is yet another PR campaign gone horribly wrong. As a consequence it has become a buzzword like blockchain, AI, agile, etc. Just because FAANG is doing it does not mean that your online shop for selling socks needs microservices in any shape or form.

A short while ago a friend dragged me in as a side consultant for one of his clients. He owned a site which is basically craigslist for musical instruments, and he was in the process of hiring a company to rebuild and subsequently modernize it. Realistically the site has around 5k users/day tops, no more than a GB of traffic, and a mysql database which over the course of 15 years has grown to less than 50GB. Basically a tiny site. The company he was about to hire had designed an architecture which involved a large EKS cluster, 15 microservices, mysql, postgres, elastic, memcache and redis, plus complicated grpc communication, and their claim was that this architecture would make the site the best in the business. OK, let's give them the benefit of the doubt: I advised him to ask them how much more performance he would gain out of that, and whether there were any other advantages over the typical VPS with a webserver. Their response was "Infinitely more performant, you'll be able to scale to 100 million users per day and the system wouldn't feel a thing. That is what Google, Facebook and all other large companies are doing". Yeah... And does he expect to have 100 million users pour in at any point in time? Take a wild guess... Mind you, they asked for the same amount of money for a simple "buy now" website.

[+] proverbialbunny|5 years ago|reply
The bit about Conway's Law

>Any organization that designs a system (defined broadly) will produce a design whose structure is a copy of the organization's communication structure.

is mind blowing. How did I not notice this the whole time?

Awesome article. <3

[+] eightysixfour|5 years ago|reply
A lot of the job of effective large organizational software/management consultants is understanding where there are mismatches between org structure and software architecture and helping to drive changes to get those in alignment. A big part of this is finding better places to put the interfaces (whether they are human or API), because they're usually holdovers from the past and no longer related to the business structure or architecture.
[+] steerpike|5 years ago|reply
It's colloquially described as "shipping the org chart". Which I always liked as a way to describe it to a wider audience.
[+] vishnugupta|5 years ago|reply
It takes one to "experience" this phenomenon from the inside of an org to appreciate it. Analogous to how one would have to work on largish codebase/architectures to appreciate design patterns.

I have a corollary to Conway's Law.

The number of network hops an end customer's request goes through before it is served is directly proportional to the number of teams in that org.

A native English speaker could help me simplify that sentence :-)

[+] ako|5 years ago|reply
You can use this to change the architecture of your software: change the structure of your teams, and your software architecture will follow.
[+] baq|5 years ago|reply
note the law doesn't specify if this is by necessity/on purpose - but designing a system which doesn't do that is usually setting yourself up for failure.
[+] posharma|5 years ago|reply
Oh, the amount of time our industry wastes on technology for the sake of technology! First, let's write tons of articles extolling the virtues of microservices. Then let's counter those with tons extolling the virtues of using microservices the right way. Hype followed by wisdom, and then back to hype again! When are we going to mature? Boring is good and predictable.
[+] avl999|5 years ago|reply
It's insane to me that for a few years so many organizations were somehow convinced that MONSTROSITIES like distributed transactions via 2-phase commit, adopted just to make the almighty microservices god happy, were an acceptable way of doing things.

The first law of microservice architecture should have been: do the microservice boundaries you have defined result in the need for cross-service transactions? If so, slam on the brakes hard.

[+] ajsharp|5 years ago|reply
The author correctly identifies the problem as part technical, part people. My shot-in-the-dark estimation is probably 70% of teams doing microservices do it to solve the people problem before (if ever) they need it to solve the technical problem.

Technical solutions are _usually_ bad solutions for people problems, and architectural patterns are probably even worse solutions for the problem of human collaboration. It doesn't help that microservices are mostly a better-sounding name for "SOA, but smaller", which has grown in prominence mostly to sell you hosting for your very many microservices. Microservices take one of the hardest and most important parts of SOA (service boundaries) and replace it with...smaller.

Glad to see someone at a larger company publishing about this.

[+] throwaway9870|5 years ago|reply
We used services to great effect at the company I previously worked at. There was not a single engineer that ever voiced a concern over how things worked. A couple important notes:

* We used a mono repo, and services only had to talk to other services from the same commit (or branch). We didn't version service APIs; I think that would have been a huge waste of time for us.

* Our tests included both unit tests and integration tests.

* We could use different languages as the problems required. We started one data processing service in Python and migrated it to Go when it became more CPU bound than we were comfortable with. We had an old but extremely reliable C++ service that had been running for almost a decade.

* Each service ran in its own container. When a container/vm would run out of memory, we all didn't have to scramble to figure out who broke something. The person who owned the code for that service dealt with it. If a service started kicking out errors nobody but the owner had to be distracted.

* We had a good team of architects that understood how to build things. We didn't split services across atomic boundaries. We knew how to build interfaces that handled errors robustly. If you don't know how to do these things, your service APIs can quickly become a mess.

* The world is services. Just think about AWS and how it works. There is no such thing as a monolith; eventually you interface with services. Be it a weather API, a CI server, a deployment system, a central logger, etc. Just because you didn't write all those services doesn't mean you don't have a service architecture.

* We had really disjoint services that would have made zero sense to put together. For example, the web server that handled the front-end vs a back-end data system that refreshed a product database and built PDF assets based on the data in overnight batch runs. We actually had many of the latter for different customers. We had services for central logging, for monitoring, for metric storage, etc. No way would I ever want to push all that into a single service and debug a GC issue or segfault.

* If we had to hotfix a service, we could deploy just that service with zero concern the other services would have an issue.

For us, services were a key part of building a very reliable system that we could update rapidly with minimal risk. We did this for almost a decade. We had a very good group of engineers, and never did I hear one of them say services were holding us back. After this experience, I would say every one of them is an advocate for the appropriate use of services.

[+] fourseventy|5 years ago|reply
Seems like microservices have moved past the "peak of inflated expectations" and into the "Trough of Disillusionment" phase of the hype cycle.

Thank god. I'm tired of semi-technical product managers throwing around the term.

[+] DamnYuppie|5 years ago|reply
I agree this is a good thing. Now that knowledge just needs to be accelerated up the corporate ladder to CTOs/CEOs.
[+] rualca|5 years ago|reply
> Seems like microservices have moved past the "peak of inflated expectations" and into the "Trough of Disillusionment" phase of the hype cycle.

The only change I've been witnessing with regards to microservices is where their critics place their personal goalpost.

Microservices is a buzzword used as a synonym for distributed systems and for the evolution of service-oriented architectures after dropping the constraints of rigid, interface-heavy XML-based technologies like UDDI and WSDL in favour of ad-hoc interfaces. Some responsibilities are moved to dedicated services when the operations side justifies it, and services are developed based on key design principles to reflect lessons learned over the past couple of decades.

But even if the hype associated with a buzzword comes and goes, the concepts and their uses are still the same.

[+] pjmlp|5 years ago|reply
Unfortunately containers are still in the "peak of inflated expectations".
[+] suyash|5 years ago|reply
Glad to see a reality check post around the hype of Microservices.
[+] eightysixfour|5 years ago|reply
This article absolutely nails it. Finally, recognition that microservices are a technical tool for solving a people and organizational problem. We need to understand that a lot of "new" technology paradigms (especially those coming down from FAANG or other large organizations) are often designed to solve the organizational problems of operating at scale, not to provide some kind of technical panacea we should all aspire to build towards.
[+] rahoulb|5 years ago|reply
My take on this (which I've not really thought through).

With O-O programming - the code itself is fine. You're breaking it up into smaller objects, each well defined, each with self-contained state. Nice and easy.

The problem is the behaviour of the system isn't defined in that code. Instead it's defined in the pattern of messages that are sent over time across different bits of code. And because it's temporal, that pattern isn't written into the actual code itself - but to understand what the system is doing, you need to understand that pattern.

The same goes for micro-services.

Each individual service can manage its individual state, it can be architected and designed in the best way for that particular requirement. But again, the behaviour of the system as a whole depends on how the different services interact (and just as importantly, fail - which is where transactions and other methods of coordinating across your code become important).

So (and I write this as a dyed in the wool O-O programmer), in both cases, the surface problem is solved but all we do is move it into a harder to observe place.

[+] glouwbug|5 years ago|reply
My last job had 43 microservices, one per SQL table. Best part: they were spread across 8 repos and shared a core library. Even better, it was Python, so imagine adding an additional function argument to the core library, updating the callers in 7 other repos, pushing 7 PRs, and it still breaking 3 weeks later in production because you missed a function argument in one of the repos.
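For what it's worth, a general Python mitigation for exactly this breakage (not something the comment describes, and the function here is hypothetical) is to add new parameters as keyword-only with defaults, so existing call sites in the other repos keep working unchanged:

```python
# Adding a parameter without breaking callers in other repos: make it
# keyword-only (after the bare `*`) and give it a default, so old
# positional call sites are unaffected and new behaviour is opt-in.

def process(record, *, validate=True):
    """Old callers pass only `record`; `validate` is a later addition."""
    if validate and record is None:
        raise ValueError("record is required")
    return {"record": record, "validated": validate}

# Existing call sites keep working unchanged:
print(process({"id": 1}))
# New callers opt in to the new behaviour explicitly:
print(process(None, validate=False))
```

Keyword-only parameters also make it a TypeError to pass the new argument positionally, which prevents a later parameter reordering from silently breaking callers.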
[+] jiraticketmach|5 years ago|reply
Microservices were/are an attempt at standardizing software development practices at really large organizations.

For example: your typical big bank has hundreds or thousands of teams (both internal and through professional services companies) developing all kinds of applications under different technologies/hardware.

In this case due to domain/organizational scale it is not feasible to have a mono repo (or 2, or 3) with a giant codebase.

And as a result the typical situation is an expensive hell full of code duplication, ad-hoc integration methods, nightmarish deployments and such.

Microservices are helpful in situations like this (even if they are not perfect).

But for a company with a couple of development teams and a domain that can be understood almost entirely by a couple of business analysts, it's overkill.

[+] 3bodyProblem|5 years ago|reply
I don't really see what this article nails. So, if I understand the argument correctly: microservices were used to increase development speed. While old pieces were left behind as legacy, new systems came up to modernize things (these are still microservices). His team, now responsible for many unloved legacy microservices, is merging them back into a monolith. The real question is whether remerging all the code back into one application is the right solution to the stability problems they had.

I think mental models are important, and having a huge blob of unrelated roles makes sense to the current development team. But it won't, just like the old situation, make sense to the new developers.

Perhaps it's just the clickbait title, but a better one would have been "Homogenizing our wild-west legacy microservices".

For me personally, microservices were a godsend: working on getting stuff done, instead of dealing with ancient code that doesn't reflect the current business anymore.

I still buy into the idea that if you can't develop a great monolith, you sure won't be great at building microservices. The modular monolith is the cool thing currently: create a monolith, without the shortcuts that create problems in the long run. Public interfaces are your most valuable pieces in the system. Worship them, code-review them, fight about them. Currently I couldn't care less about the implementation itself. Does it solve our problems, is it fast enough, is it tested well? Ship it. What language you used, the architecture, the database - I don't care. Just make sure it's a joy to use from the outside.

If only more developers would spend longer thinking about the problems and less time throwing around large amounts of code that makes them feel smart. Making a microservice doesn't fix that problem.

I think what is missing is the stability that microservices are able to give in their most optimal form: each service being the main source of truth, and the second data leaves the system it is stale reference data. Is stability important? Use the old reference data - how fresh does your data really need to be?

[+] _y5hn|5 years ago|reply
This is about a legacy system, and scaling back to a monolith, because of that need.

I'm not holding my breath in anticipation that this won't be misapplied and cargo-culted.

[+] lmilcin|5 years ago|reply
I have worked in a couple of organizations where I thoroughly explained why we should undo the "microservices" approach and go back to a monolithic application.

Basically, the problem is that in most cases the move to microservices is pushed by management to the complete exclusion of understanding whether the organization is ready to implement it.

Microservices require a mature approach to many topics. You need to have really nailed down automated deployments, and you really need to have nailed down automated creation of development infrastructure.

In one company I had a tool where I could log in, give a new project name, click a couple of parameters like the namespace, and a completely new microservice would be generated with Bitbucket, Jira, automated deployments, a Confluence space, etc.

If you need to spend time to configure anything individually for a project, to do deployments, etc., you are not yet ready to do microservices.

In all those cases where we later scaled back on microservices, the developers had switched from developing the application to being engaged full-time with managing the application and its numerous configurations.

In one of the applications we rolled 140 services back into one. Before, preparing a deployment would take 2 weeks, as the developers would be compiling and sending emails with information about what needed to be done where. Then the deployment itself would take one day, as an engineer executed all those runbook instructions. Each of the 140 services had its own version, which made ensuring correct versions a separate problem.

After the change, where we rolled it all into a single service under a single version, the entire thing took 2 hours. Still manual, but way cheaper and more reliable.