top | item 10440094

In Defence of Monoliths

82 points | nkurz | 10 years ago | techblog.bozho.net | reply

74 comments

[+] unoti|10 years ago|reply
There's a dangerous, contagious illness that developers of every generation get that causes them to worry about architecture and getting "street cred" even more than they worry about solving business problems. I've fallen victim to this myself, because street cred is important to me. But it's a trap.

An important meta-idea that encapsulates all of this: remember to solve your business problems as your first priority, and implement technology in service to that goal, first and foremost.

It's far too easy to put various technology decisions in the driver's seat, ahead of the business problems. This leads to all kinds of anti-patterns over the years. For example, I saw software development organizations destroy themselves trying to do 100% pure UML-first software using Oracle's Designer/2000 and Developer/2000 tools, because that was just the "right" way to do things. I've seen organizations destroy themselves in the morass of trying to do things with Enterprise Java Beans (EJB) because it was just the right way to do things. I've seen companies spend 10x more time getting their automated build processes and super duper full test coverage going than they spend trying to actually write software.

And I've seen companies navel-gaze over their processes and methodologies and buzzword compliance far more than they worry about meeting the needs of their customers. If you've ever worked for a company with a 15 year history of ironclad determination to use all of Microsoft's latest preferred data access methodologies, then you have first-hand knowledge of another example of this.

I'm not saying microservice architecture is a worthless fad, but I am saying that putting your business needs into the done-basket absolutely must be the first priority; never lose sight of it. I'm all for being wary of technical debt, but "technical debt" isn't nearly as dangerous as whatever you call that contagious illness that infects developers and makes them put architecture before business needs.

[+] Walkman|10 years ago|reply
I agree with you, but the opposite of this is also horrible. I'm working for a company that only focused on business needs, and the result is horrible: a terrible, unwieldy codebase, full of bugs, with no security at all. It's hard to develop features, it suffers from feature creep, and it has absolutely no test coverage.

It's important to find balance.

[+] brianmcc|10 years ago|reply
Ha, so true. Learned that lesson with Rational in my case :-)

Plus the compulsory EJB pain. I found it quite liberating to be able to just abandon EJBs once it became clear what a disaster they were, without having to do the Emperor's New Clothes bit.

I kind of share the scepticism about shiny new paradigms now, but one does have to be mindful not to inadvertently reject good new things because they're new.

[+] pjmlp|10 years ago|reply
It would help a lot if those developers spent some time in companies that don't produce software as their main business.

Where software is seen as a cost center and is actually developed 100% by contractors who fine-tune existing code bases over multiple years.

I guess there isn't a better cure for those who tend to follow each fad that pops up.

[+] djb_hackernews|10 years ago|reply
The thing to keep in mind is that only a minority of software developers are actually working on software with actual customers.

So when you don't have customers, you get to indulge in anti-patterns and navel-gazing, because then at least people are doing something.

[+] ap22213|10 years ago|reply
It's important to solve problems, but if you want to win at this startup game, how you solve those problems can make or break you.

In most cases there are others solving the same problem. If you are the team bringing a better solution to market faster, with better design, higher quality, less maintenance, lower infrastructure costs, and fewer developers, you win easier and earlier. But that takes some dabbling and experimenting.

In the end, it's really about working with high quality people. A team of focused, pragmatic, informed, and disciplined people can take the right new tool and become an order of magnitude better. And, they can take the wrong new tool and know when to shelve it quickly.

The winners in this game aren't writing COBOL against mainframes - even though that would probably yield a sufficient solution eventually.

[+] zenbowman|10 years ago|reply
Very well said.

A clean data model, one aligned with business needs, and sensitive to the nature of machines, makes the best architecture.

Far too many decisions are based on what is trendy and fashionable rather than what is right for the domain.

[+] ExpiredLink|10 years ago|reply
> more than they worry about solving business problems.

That's why companies implement Agile. It's all about "solving business problems". No architecture, no design, no documentation (merely 'user stories'). Dump and re-write instead of maintain. All your problems solved!

[+] timothycrosley|10 years ago|reply
It depends how you do microservices. There are middle grounds. One big gain of microservices is that they guarantee things are separate and can be handled by separate teams if the need arises. That doesn't mean you need to start out that way.

For instance, in Python I use hug (https://github.com/timothycrosley/hug) to create my microservices. I can then install them together to create a "monolithic" service that consumes all the microservices. The great thing is that hug lets you expose each one both as a web service and as a Python library, so I can consume it as a Python library with no overhead until the need to split becomes evident, and then split up the services with very little work. Of course the need may never arrive, but the modularity that microservices force on you pays dividends quickly regardless.
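To make the split-later idea concrete, here's a minimal sketch in plain stdlib Python (not hug's actual API; the `greet` function, route, and port are all made up for illustration): the code starts life as an ordinary in-process function, and an HTTP wrapper is only bolted on when a real split is needed.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Phase 1: the "service" is just a plain function, consumed
# in-process with zero overhead.
def greet(name):
    return {"message": "Hello, %s" % name}

# Phase 2 (only when splitting becomes necessary): wrap the very same
# function in a thin HTTP layer; callers swap the import for a client stub.
class GreetHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        name = self.path.rsplit("/", 1)[-1]
        body = json.dumps(greet(name)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# To actually serve it over the wire:
# HTTPServer(("127.0.0.1", 8000), GreetHandler).serve_forever()
```

Nothing about `greet` changes between the two phases, which is the whole point: the module boundary exists from day one, and the network boundary stays optional.
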
[+] eropple|10 years ago|reply
This is more or less what I'm doing right now with Dropwizard. I started building this app as a set of separated microservices, but was finding it to slow me down more than I wanted it to--so I flipped it upside-down and loaded every "application" into a single Dropwizard service. Which is uncomfortably like application servers, but I'm using this almost exclusively to get around doing the inter-service wire-up that I'd have to do manually, rather than letting my DI container do it. Each module's still separated, and if we need to fork services off in the future, it's about twelve lines of code.
[+] untothebreach|10 years ago|reply
Huh, I had never seen `hug`, thanks for mentioning it!
[+] cpitman|10 years ago|reply
> decentralize all things – well, the services are still logically coupled, no matter how you split them.

I think this is missing the point. It isn't just about decentralizing services, it is also about allowing you to decentralize teams and decision making. With very large teams working on a single monolith, features that are complete often cannot be deployed because of the larger organizational overhead of planning and coordinating a release. The higher coupling in most monoliths also restricts a team's ability to try new things and take risks.

If performance of technology is the only metric that matters to you, then yes, microservices are probably a horrible idea. If you are having difficulty scaling your teams, then it might be worth looking at.

[+] vonmoltke|10 years ago|reply
I don't see how microservices makes that overhead go away. That overhead is really a matter of proper systems engineering. With well-defined interfaces and properly-sectioned functional components teams can work just as independently on a monolithic application as they can on a microservice application. Similarly, a poorly-sectioned application with bad or nebulous interfaces will be a planning and coordination nightmare regardless of the architecture.

What I think the microservice architecture does is force some measure of good systems engineering on a team. You can't design a microservice application without putting effort into deciding how to divide the functionality and defining how those functional blocks will work with each other. You can get away with not doing that (until it's too late) when designing a monolithic application.

[+] pdpi|10 years ago|reply
> The higher coupling in most monoliths also restricts a teams ability to try new things and take risks.

I think it's you who's missing the point. That right there is why.

Turning your monolith into a micro-service architecture should add up to basically:

- Wrap your internal modules in your favourite form of RPC

- Replace the modules with stubs that call the RPC

If you need more work than that, the problem with your application isn't being a monolith, it's having functionality be way too tightly coupled.

If turning your application into a micro-service architecture actually _is_ that simple, then you already have well-delineated modules that your teams can focus on. Deployment is just about integrating the latest stable version of each module, and the decision process within each team needs only respect the contract around the interface they provide -- same as a micro-service.

Saying you need a micro-service architecture to keep a sane internal structure to your application is a symptom that you need to review your engineering practices, because people aren't respecting the interfaces, and it's throwing out the baby with the bathwater.
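The wrap-in-RPC step described above can be sketched like this (hypothetical names, with plain HTTP standing in for "your favourite form of RPC"): the local module and the remote stub share one interface, so caller code never changes when a module becomes a service.

```python
import json
import urllib.request

class LocalInventory:
    """The module as it lives inside the monolith."""
    def __init__(self):
        self._stock = {"widget": 3}

    def stock_level(self, sku):
        return self._stock.get(sku, 0)

class RemoteInventory:
    """A stub with the same interface, backed by a remote call."""
    def __init__(self, base_url):
        self._base_url = base_url

    def stock_level(self, sku):
        with urllib.request.urlopen(f"{self._base_url}/stock/{sku}") as resp:
            return json.load(resp)["level"]

def report(inventory, sku):
    # Caller code is identical whether the module is local or remote.
    return f"{sku}: {inventory.stock_level(sku)} in stock"
```

If swapping `LocalInventory` for `RemoteInventory` requires more than changing which object gets constructed, the modules were never well-delineated to begin with.
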

[+] rcconf|10 years ago|reply
We've had a lot of issues with microservices. They're extremely difficult to debug and end up being too generic. I'd argue the reasons we introduced microservices have merit (a massive code base), but that does not mean they are simple to use or maintain.

Here are some examples:

1. A chat service (race conditions, synchronization issues between the game server and chat service that had to be debugged using sequence diagrams)

2. A payment service that handled Facebook, PayPal, and other payment methods. (race conditions, complicated integration, difficulty upgrading for different products, complicated code base to handle multiple use cases)

3. An authentication service (complicated protocol, hard to integrate)

4. A worker service for handling bcrypt (because blocking a single threaded server for 0.5 seconds is not acceptable, race conditions)

5. A tracking service (could have probably just been a library you included instead of hitting an API)

Core issues:

- Race conditions

- Synchronization issues

- Complicated to upgrade/maintain for multiple products

- Become too generic and solve multiple problems (services tend to be used for multiple products in the company)

- Extremely difficult to debug

- Complicated error handling for when a service is not reachable

pro-tip: don't share databases between multiple products for each service, deploy a new service for every product with its own database.

I think I'm scratching the surface here, but I would be really careful with introducing this kind of architecture when you can do it all in one server.
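On point 4 specifically: in some runtimes a whole worker service can be avoided by pushing the blocking hash onto a pool inside the same process. A rough sketch under that assumption (stdlib `pbkdf2_hmac` standing in for bcrypt, and the function names are made up):

```python
import asyncio
import hashlib
import os

def slow_hash(password: bytes, salt: bytes) -> bytes:
    # CPU-heavy key derivation; a stdlib stand-in for bcrypt here.
    return hashlib.pbkdf2_hmac("sha256", password, salt, 200_000)

async def register(password: str) -> bytes:
    salt = os.urandom(16)
    # Run the blocking hash on a worker thread so the event loop
    # keeps serving other requests instead of stalling for ~0.5s.
    return await asyncio.to_thread(slow_hash, password.encode(), salt)

digest = asyncio.run(register("hunter2"))
```

This only helps when the runtime has usable threads or subprocesses to offload to; on a strictly single-threaded server, like the one described above, a separate worker really may be the only option.
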

[+] debacle|10 years ago|reply
I don't mean to sound crass, but it sounds like you implemented microservices poorly. You could s/microservices/threads/g on your post above and it would point much more to a problem with your implementation rather than a problem with threads.
[+] abritishguy|10 years ago|reply
I work for a startup bank in the UK (Mondo). Our Go microservices architecture allows us to maintain velocity whilst staying secure.

The core banking services that actually move money around are isolated from the more "fluffy" customer facing ones that we want to be able to push updates to several times a day. We have to have incredibly rigorous procedures for updating services that control money, if we had a monolithic architecture then these procedures would have to be used even if the change was simply cosmetic.

[+] DanielBMarkham|10 years ago|reply
Being a new "convert" to microservices, and coming from a classic OOA/D/P background, I read these critiques with great interest. I keep waiting for one that shows me what I'm missing.

What I'm finding, however, is that many of these authors have such a broad understanding of what microservices are that they miss any benefit. Then, of course, they complain about there not being any benefit, natch.

What I suspect -- and what I have feared -- is that the term "microservices" has been co-opted and rebranded by a variety of vendors and proponents. The goal there is to sell products and services, not necessarily to solve problems.

There is certainly a huge wheel of hype in the technology world, where things become cool, then old hat, then abused, then nobody does them anymore, then they return under a new buzzword.

Having said that, in the future I'm going to always make a point of defining exactly what I mean instead of just using the term "microservice". My current definition is something like this:

- Pure FP

- Unix Philosophy

- Each microservice has less than 150LOC

- Common code, types, and persistence functions are moved to shared libraries to reduce interop concerns

Not sure if this makes a difference in the discussion, but I know that it will help me keep straight whether various authors actually have quibbles with microservices -- or are just re-applying their pre-existing OO thinking to a place where it doesn't necessarily map so well.

[+] vskarine|10 years ago|reply
Don't you think 150LOC is a bit extreme on the low side? I don't see how this is possible unless you compose services out of other services... and in that case it's probably a nightmare to debug anything. Can you elaborate a bit on the types of things this has worked great for?
[+] cpitman|10 years ago|reply
Do you subscribe to the idea that services should correspond to "bounded contexts", ie a business domain? And if so, what do you do when a bounded context is too complex to fit into 150LOC?
[+] Thaxll|10 years ago|reply
"- Each microservice has less than 150LOC"

Sounds like a nightmare. Do you know how many instances/VMs you need if you spawn one thing per 150LOC?

[+] debacle|10 years ago|reply
An honest question - has Martin Fowler ever written any code that makes him so much of an authority on software design?

I don't disagree with everything he's said, but what contributions has he made that makes him so much of an authority on how I write code?

[+] jjbiotech|10 years ago|reply
That's an impressive logical jump you've made there. Starting with a man simply writing a blog post with his opinion on software architecture, all the way to implying that he's an authority on writing code.

He never said he was an authority...

[+] Glyptodon|10 years ago|reply
One thing I don't think gets hammered on enough is that if your organization is not large, microservices will create a lot of overhead for little gain.

If you have few developers most of the time it's a lot more efficient to have a modular monolith than it is to have 50 entirely different rocks.

Decentralizing teams, decision making, and allowing increased independence of agency within a larger org is great. Trying to do the same thing when you only have 5 people is usually borderline crazy.

[+] _pdp_|10 years ago|reply
As always, there is room for both styles of programming. Some applications simply do not require this type of architecture. Testing a monolith is certainly easier than testing thousands of microservices. Debugging is also a lot easier. Dependency management is also a hell of a lot easier.

Yes, microservices (or simply services) are very good for large-scale applications, but they are simply overkill for any startup project. Why not concentrate your efforts on selling your product first, versus building the perfect infrastructure that no one uses? When the time comes to upgrade, well, upgrade. Amazon is a good example of a large company that made the move from monoliths to services, and they executed it well. You can do the same thing when it is absolutely required. You know, refactoring!

[+] patrickmay|10 years ago|reply
One advantage of a microservice architecture not mentioned in the article is the ability to scale services independently of other services. Being able to fire up more instances to address a bottleneck is often much simpler than managing threads in a monolith.
[+] ap22213|10 years ago|reply
This is an important point. If you have a system that processes billions of inputs a day along a fairly complex pipeline, you're constantly scaling up and down. It would suck pretty bad if you had to scale mostly vertically.
[+] bozho|10 years ago|reply
That is true in some very rare cases. CPU-intensive bits of the application should definitely be separated so that they can scale independently. But that's not the main point of microservices.
[+] beberlei|10 years ago|reply
You can often just as easily deploy a monolith to multiple servers and grant different subtasks different CPU and memory for their work. For me it's just a matter of adjusting the supervisord config for each server: how many of which workers start and run on that machine.
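For example, a hypothetical supervisord layout along those lines (program names and paths invented): the same monolith codebase is deployed everywhere, and each server's config decides which workers run there and how many.

```ini
; web-facing box: many HTTP workers, no background jobs
[program:web]
command=/srv/app/bin/run-web
numprocs=8
process_name=%(program_name)s_%(process_num)02d

; batch box: same codebase, different worker mix
[program:import-worker]
command=/srv/app/bin/run-worker --queue=imports
numprocs=2
process_name=%(program_name)s_%(process_num)02d
```

Scaling a subtask up or down is then a config change and a `supervisorctl update`, with no service boundary to maintain.
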
[+] exelius|10 years ago|reply
Yeah, my general rule is that unless your production environment is already so complex that you have multiple people who do strictly DevOps work, you should probably stick to a monolith. It's a lot easier to transfer state between multiple deployments of the same monolith than it is to go full microservices.

Microservices are a gigantic pain in the ass. They increase operational complexity significantly, and they will require you to make a lot of investments in building/buying/implementing infrastructure around things like logging, monitoring and config management. Your number of possible variables in QA explodes exponentially, and suddenly you have to worry about things like API versioning.

If all of this sounds easier than dealing with the tech / organizational debt in your monolith, then microservices may be right for you. Otherwise, save yourself the trouble and focus on scaling your business instead of your technology platform.

[+] Sleaker|10 years ago|reply
I'm not advocating for microservices, but I think the blog post fails to understand the difference between patching and deploying a single microservice without restarting your entire environment stack, and patching a single module in a monolith, which requires you to restart the entire monolith. Maybe that's not an actual issue, but the author seems to disregard the fact that restarting a single microservice is explicitly different from restarting the entire monolith, for the very reason that time may become an actual factor.
[+] lemmsjid|10 years ago|reply
Not disagreeing with you, just adding some color.

We have a monolithic application (well, an application composed of many libraries that have a dependency tree, but live in the same process) that contains individually deployable services that talk to one another over RPC. So when you're patching part of the codebase, you can deploy a branch to the servers that are executing the code you've just changed, thus no need to restart the whole thing. But you still have the benefits of shared libraries, moving functionality between services, collapsing service calls that are no longer performant, adding new calls temporarily, etc.

In this model, you can introduce and remove service calls when it's useful, as opposed to as a result of how the code was put together. You will often do this for performance reasons, but just as often you'll do it for deployment flexibility. When you want to prototype something new, you can create a new service, branch the codebase, and have it talk to the rest of the infrastructure via service calls. When you're done you can keep it as a service, or, just as often, you can collapse the service back into other services so we don't have excessive RPC calls.

When would we consider moving to services backed by different codebases? If our company were to grow so large that highly disparate teams would want to manage their own dependency trees -- that's around when it makes sense to me to go microservice. (I'm not sure I'd call it microservices then; more SOA.)

[+] marcosdumay|10 years ago|reply
You know, nearly all software running out there does not need to be online 24x7. And for the tiny exception that does, reducing restart time is just not the way to get that 99.9% uptime.
[+] thecourier|10 years ago|reply
I have seen cases where a group of stateless monoliths would have solved a business case more efficiently than the micro-services alternative.

I'm not saying micro-services are a fad -- they are great for isolating domains and for graceful degradation of service -- but that comes with a high amount of boilerplate code for remote invocation and serialization.

I would personally say: if your project has not reached 10,000 lines of code, do not go that way. Also, if you can solve the problem at hand with a cluster of stateless monoliths, keep it simple.

Dear friends, keep Occam's razor at hand.

[+] bullen|10 years ago|reply
The important thing is to improve tools, and with a monolith you can't do that.

So build a distributed PaaS with hot-deploy, then a distributed HTTP database; then you can use microservices without any of the problems mentioned (complexity becomes a non-problem, since everything uses the same stack and very quickly becomes bug-free, and overhead is removed by the PaaS)!

The important step with microSOA is that each developer can choose his tools. That trumps everything else.

[+] csears|10 years ago|reply
I think the choice of monolith vs microservice should be largely driven by the size of your team. Some guidance I heard on a podcast recently (can't recall which) was to only consider microservices if you had more than 50 people trying to deploy code in the same app/system.