A lot of people may not remember this, but the microservices bandwagon was pushed heavily by Heroku, which (at the time) was a ubiquitous and influential platform among devs working at startups.
Why did they care? Because their particular hosting model at the time really couldn't handle complex architecture, so microservices speaking to each other over HTTP was the only way to do things.
The arguments in favor of microservices at least sounded good enough that they were adopted at a lot of orgs before anyone actually needed the supposed benefits, and it turned into a big mess.

What is defined as complex architecture? Could you elaborate? Isn't "a bunch of microservices talking to each other" usually the complex thing?
Yep. Microservices were pushed hard by Martin Fowler who gives terrible programming advice that always yields extremely lucrative returns for cloud platforms.
Same with the whole "serverless" thing, heavily pushed by Amazon and co, which conveniently offer "serverless services". These trends are always pushed by marketing departments at SaaS companies.
I’m fascinated that the service architecture ecosystem seems so fixated on polarity.
The right answer is pretty much always medium sized services and medium sized repos. I guess that isn’t edgy enough to make people feel like they are doing something cool
Engineering types often prefer to think in hard rules…either an approach is absolutely better or absolutely worse.
Saying medium-size services are best means that the engineer needs to use their judgment to determine where that threshold is. It becomes more of an art than a science, which many engineers are deeply uncomfortable with, so they stick to their dogmas.
IMO the best way to split things is either by scalability, or user needs, or sometimes by domain (but this is too nuanced to fit into a comment). I suppose there might be security or certification/regulatory requirements too, but also those are probably rare.
Let's verify that: Kubernetes is not always useful. One can do very well without Kubernetes.
Let 'em come and tell me I need Kubernetes everywhere! Too many people think it's black/white and that there is no grey in between. And more and more often I realize that the grey area is even larger than either of the (hard) sides.
I think he makes some good points, especially about how a lot of things really should be libraries. I'm also not sure I really understand the semantics of it all though. Because what is a color microservice? Something you call to get a color? Do organisations actually make stuff like that? That's wild.
I can only speak for the places that I have worked, but I think that what he calls an "App" is what would likely get called a "Microservice" in a lot of places.
From context, it seems to be a well known 'quintessential extreme example' of when you have obviously gone way too far into microservices. So it's probably something super trivial like coloring text maybe? Or maybe convert between color spaces?
The main problem with microservices is simply that most people don't understand how to design them correctly.
A poorly designed monolith is pretty much always going to be superior to a poorly designed microservices architecture. The floor protecting you from bad design is significantly higher.
The biggest mistake is that people tend to create services that are far too small, and too many of them. Then it gets chatty and you end up with distributed spaghetti code. Or they fail to divide services into coherent domains. Or they overindex on scaling individual functions/cost rather than on logical boundaries.
You should almost always err towards services that are too large rather than too small. You can always subdivide further later on.
I interview tons of senior engineers and managers from reputable companies who describe designs that would produce immense technical debt.
I have found that it's more the dogma of putting up microservices for everything that spoils the pot.
My take on it is that we should have called them "distributed services", because micro implies that they have to be small.
Usually there are clear borders in your app around stuff that can be compartmentalized, but they usually should be called medium services instead.
Also - your database is a microservice - design your data access layer accordingly; you don't need an API endpoint where a stored procedure (or the like) will do.
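The "stored procedure instead of an endpoint" idea above can be sketched in a few lines. This is a hypothetical example: SQLite has no stored procedures, so `create_function` stands in for one; with PostgreSQL you'd `CREATE FUNCTION` and call it the same way from the data-access layer.

```python
import sqlite3

# Treat the database as the service boundary: instead of exposing an HTTP
# endpoint that computes order totals, the data-access layer asks the
# database to do it. Table and function names are made up for illustration.

def line_total(price_cents: int, qty: int) -> int:
    """Logic that lives at the database boundary, not behind an endpoint."""
    return price_cents * qty

conn = sqlite3.connect(":memory:")
conn.create_function("line_total", 2, line_total)
conn.execute("CREATE TABLE order_lines (price_cents INTEGER, qty INTEGER)")
conn.executemany("INSERT INTO order_lines VALUES (?, ?)", [(500, 2), (250, 4)])

total = conn.execute(
    "SELECT SUM(line_total(price_cents, qty)) FROM order_lines"
).fetchone()[0]
print(total)  # 2000
```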
> My take on it is that we should have called them "distributed services" because micro implies that they have to be small.
> Usually there are clear borders in your app around stuff that can be compartmentalized, but they usually should be called medium services instead.
These are excellent points and probably loosely coincide with how many teams you might have working on different parts of the greater system as well.
> ...you don't need an API endpoint where a stored procedure (or the like) will do.
I'm not sure about this. Outside of niche cases and domains (like batch processes), stored procedures are harder to develop, test, debug, and audit (or at least less convenient to, when you want to ship your logs with something like GELF), and they also have worse tooling for things like enforcing code style rules.
Just look at the reality in many places: https://www.jetbrains.com/lp/devecosystem-2021/databases/#Da...
Do you debug stored procedures?
47% Never
44% Rarely
9% Frequently
Do you have tests in your database?
14% Yes
70% No
15% I don't know
Do you keep your database scripts in a version control system?
54% Yes
37% No
9% I don't know
Do you write comments for the database objects?
49% No
27% Yes, for many types of objects
24% Yes, only for tables
How would you feel about code that about half of people have never debugged properly, that almost three quarters of people don't bother testing, that only about half keep under version control, and that about half don't even document? Because as far as I can tell, this matches my experience when working with the majority of databases out there: either the development culture or the tooling (or the entire architecture) isn't there yet, and it might never be.
While database design should be a major consideration that demands attention, I personally maintain that the majority of the functionality should go into the app, regardless of whether you use an ORM or not. An exception to this would be using plenty of database views to make querying easier, and putting thought into how one could handle enums and the like, so you don't end up with a database that's impossible to understand without reading the app source code in another window.
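The views-plus-enums compromise above can be made concrete with a small sketch (table and column names are made up for illustration):

```python
import sqlite3

# The schema stays understandable on its own, but the heavy lifting still
# lives in the app. A view decodes the status enum so the schema is readable
# without having the app source open in another window.

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, status INTEGER);
    -- status 1 = active; the view names the meaning instead of the magic number
    CREATE VIEW active_users AS
        SELECT id, name FROM users WHERE status = 1;
""")
conn.executemany("INSERT INTO users VALUES (?, ?, ?)",
                 [(1, "alice", 1), (2, "bob", 0)])

names = [row[0] for row in conn.execute("SELECT name FROM active_users")]
print(names)  # ['alice']
```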
How long is the half-life of architectural best practice?
It seems to me that after ~10 years, half the cargo-culted ideas that young developers go all in on fail. Perhaps that's just long enough for them to get older and gain experience.
Perhaps the real issue here is the "thought leaders" who should know better but are perfectly happy to sell their ideas as risk free, when in fact these things are incredibly context sensitive to the team, problem and company shape.
Nobody ever seems to say "Do you believe what you're saying, or are you simply collecting a speaker's fee?".
> How long is the half-life of architectural best practice?
This is a great question to ask, though I think one should also apply that same way of thinking to frameworks, languages and a lot of the other software out there (like OS distros or databases).

Probably while looking at the Gartner hype cycle: https://en.wikipedia.org/wiki/Gartner_hype_cycle
Sometimes there are good ideas that turn out not to be feasible, and the same can happen with technologies. But if a technology has been around for a large number of years and hasn't died yet, that's a good predictor of its continued existence: like Linux, or PostgreSQL.
> It seems to me that after ~10 years, half the cargo-culted ideas that young developers go all in on fail. Perhaps that's just long enough for them to get older and gain experience.
Another thing I've noticed is that sometimes the "spirit" of the idea remains, but the technologies to achieve it are way different. For example, we recognized that reproducible deployments are important and configuration management matters a lot, however over time many have gone from using Ansible and systemd services to shipping OCI containers.
These technologies bring their own cruft and idiosyncrasies, of course, but a lot of the time allow achieving similar outcomes (e.g. managing your configuration in an Ansible playbook, vs passing it in to a 12 Factor App through environment variables and config maps).
Of course, sometimes they bring an entirely new set of concepts (and capabilities/risks) that you need to think about, which may or may not be intended or even positive, depending on what you care about.
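The Ansible-vs-12-Factor contrast above can be sketched minimally: the same outcome (environment-specific settings) is supplied by the runtime instead of templated into the artifact. Variable names here are illustrative.

```python
import os

# 12 Factor-style config reader: settings come from environment variables
# (e.g. set by a Kubernetes config map), with sensible local-dev defaults.

def load_config() -> dict:
    return {
        "db_url": os.environ.get("DATABASE_URL", "sqlite:///dev.db"),
        "log_level": os.environ.get("LOG_LEVEL", "INFO"),
    }

os.environ["LOG_LEVEL"] = "DEBUG"   # pretend the platform injected this
config = load_config()
print(config["log_level"])  # DEBUG
print(config["db_url"])     # sqlite:///dev.db (falls back to the default)
```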
It would be interesting to know how long it took things like "making tall buildings" or "making safe bridges" to shake out into roughly what they are today. I suspect, for example, the project planning aspect is pretty stable.

A bunch of best practices around data handling, security, and general application architecture for specific use cases is useful. "Do my great new idea now or lose out" is really awful, and I've witnessed that kind of belief do irreparable harm to companies. There's some parallel to draw between the "new tech hotness" that appears every few years and the dancing fever plagues.
I had my own hell with Microservices during my last big-corp job. Having to deal with API migrations was a nightmare, because other teams never wanted to spend the time to switch to the newer versions, and it was hard to even figure out where in their codebase the API calls were being made from.
There is definitely still a lot of opportunity to breach the boundary between development and production to understand distributed systems
> it was hard to even figure out where in their codebase the API calls were being made from.
Could this have been worked around with https://www.jaegertracing.io/? That was one of the requirements at the organization I worked at.
We didn't go as far as to block traffic if the request didn't have a parent span (which included the name of the calling service), but if it became a problem I'm sure we could have/would have done that (this was 200+ microservices for fintech/banking across 10-15 teams) and figured out pretty quickly in a CAT/UAT/QA environment what wasn't passing a span.
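A minimal sketch of the "block traffic without a parent span" idea: Jaeger propagates trace context in the `uber-trace-id` header (`trace-id:span-id:parent-span-id:flags`), so a gateway or middleware can refuse requests that arrive without one, which quickly flushes out callers that aren't participating in tracing. The parsing here is deliberately simplified.

```python
# Check an incoming request's headers for Jaeger trace context before
# letting it through. Returns (allowed, reason) for the caller to act on.

def check_trace_context(headers: dict) -> tuple[bool, str]:
    raw = headers.get("uber-trace-id")
    if raw is None:
        return False, "missing uber-trace-id header"
    parts = raw.split(":")
    if len(parts) != 4:
        return False, "malformed uber-trace-id header"
    return True, "ok"

ok, reason = check_trace_context({"uber-trace-id": "abc123:def456:0:1"})
print(ok, reason)   # True ok
ok, reason = check_trace_context({})
print(ok, reason)   # False missing uber-trace-id header
```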
Can’t help but think this is what Elon Musk is facing at Twitter, where he’s looking to cut the ‘microservices bloat’ (https://twitter.com/elonmusk/status/1592177471654604800). But given the complexity of a 1000+ microservice architecture, if it's not built properly, turning off a seemingly innocuous microservice can cause cascading issues and outages. Not to mention he fired a bunch of folks; some of those microservices likely no longer have owners.
I don't buy any argument that claims one is better than the other. Most projects start as a monolith, but as time goes on, the project turns into a big ball of mud. There may be a few exceptions: projects that had strong tech leads to enforce boundaries. Have you run into projects where the test suite takes hours to run? At some point people decide to enforce the boundaries at the service level, by splitting the monolith into services. Maybe in the future communication between services will become a bottleneck. This might be a good trade-off from some perspective: you reduce the blast radius of bad decisions from engineers and make it easier to rewrite services that were done poorly.
> Most of the projects start as monolith, but as time goes on, the project turns into a big ball of mud.
Start as microservices and you get a bigger and dirtier ball of mud. Guaranteed. Because you have no notion of the boundaries at the start. A monolith at least has the luxury of automated refactorings.
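For what it's worth, a monolith can get boundary enforcement cheaply too. Here's a sketch of the idea behind tools like import-linter: statically reject imports that cross a forbidden package boundary. The module names are hypothetical.

```python
import ast

# Declare which packages must not import which others, then scan source
# for violations. A CI step failing on a non-empty result enforces the
# boundary without splitting anything into a service.

FORBIDDEN = {"billing": {"notifications"}}  # billing must not import notifications

def boundary_violations(module_name: str, source: str) -> list[str]:
    banned = FORBIDDEN.get(module_name, set())
    out = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            names = [a.name for a in node.names]
        elif isinstance(node, ast.ImportFrom):
            names = [node.module or ""]
        else:
            continue
        for name in names:
            if name.split(".")[0] in banned:
                out.append(name)
    return out

bad = boundary_violations("billing", "from notifications.email import send\n")
print(bad)  # ['notifications.email']
```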
The problem is that I see microservices adopted for two very different reasons:
1) We need to split this thing out because we need to isolate it
Problematic. Generally the issue is political--not technical. The service is necessary but not getting enough support--isolate it to force fixes. Some service not being responsive enough to internal customers--isolate so tickets now can be assigned and blamed on them. etc.
2) We need to split this thing out because it's a performance bottleneck and we need to scale it
A reasonable choice. The people scaling something need to be able to bound and limit the scope or they'll never make any progress.
The issue is that #2 is VASTLY rarer than #1.
I kinda disagree. I've scaled plenty of code by just adding another monolithic app server. It rarely matters where the load is coming from, another pile of resources evens it out.
But if you have a problematic part of your codebase that regularly shits the bed, keeping it in your monolith will take it all down. Splitting it out is the smart thing.

“If this service fails we can carry on working with a degraded experience” - notifications on Twitter et al are probably one example of this.
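A minimal sketch of the degrade-instead-of-take-everything-down alternative: wrap the flaky subsystem so its failures become a degraded response rather than an error for the whole request. The names here are made up for illustration.

```python
# The core feature (the timeline) must always render; the flaky subsystem
# (notifications) is allowed to fail without taking the page down with it.

def fetch_notifications_flaky():
    raise TimeoutError("notifications backend is down again")

def render_home_page() -> dict:
    page = {"timeline": ["post 1", "post 2"]}   # core feature, always present
    try:
        page["notifications"] = fetch_notifications_flaky()
    except Exception:
        page["notifications"] = None            # degraded, but the page works
    return page

page = render_home_page()
print(page["timeline"])       # ['post 1', 'post 2']
print(page["notifications"])  # None
```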
Couldn’t agree more. I also really like his idea of apps. Microservices introduce operational complexity and overhead. Most of the time, I see teams using them by default on what really should be a monolith.
I personally view anyone trying to design an application from the start using microservice architecture to be doing premature optimization. Microservices are meant to solve a scaling problem.
Yes, design your applications in a modular way such that it becomes readily possible to stub parts of it out when the time comes. Don't start with a microservice focus or you're just begging someone else to beat you to market.
I have an inkling that Netflix wasted a lot of their potential with all this microservices mess. They had to hire very experienced engineers and pay them more than anyone else because that is the only way all those microservices could be maintained.
Hmmm, technologically they've always seemed fine. Even now many of the other streaming services struggle to do that: stream video. I've never had serious hiccups with Netflix that I can recall. CBS or HBO though? Ugh. Also their FreeBSD servers always seemed really impressive.
The headline might be a bit misleading, depending on what he intended to say. I don't read his thread as "The biggest architectural mistake at GitHub was...", more that it was a mistake to bet 100% on microservices in general, not that GitHub specifically did. The headline makes a close association between GitHub and this rant, but the thread doesn't really mention GitHub.

Apart from that, THANK YOU Jason. Finally someone with some common sense! And for all you tech recruiters hiring for microservices and cloud native, get a fucking grip already.
Our backend is microservice based because it's a thin layer over storage. There are functions that do processing work, but those exist so the thin layer works.
The big problem with microservices is handling failure. If you are chaining microservices together, you're screwed. That's why stuff like GraphQL exists.
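The chaining problem is easy to quantify: per-call availabilities multiply, so a deep call chain of individually reliable services is much less reliable end to end. A quick sketch:

```python
# If one request fans out through a chain of N services that are each 99%
# available (and every hop must succeed), the composite success rate is
# 0.99 ** N, which falls off quickly with chain depth.

def composite_availability(per_service: float, chain_length: int) -> float:
    return per_service ** chain_length

for n in (1, 5, 10, 30):
    print(n, round(composite_availability(0.99, n), 3))
# 1 0.99
# 5 0.951
# 10 0.904
# 30 0.74
```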
That said, the reason microservices exist is that bloat and dependencies eventually choke monoliths. Also, monoliths are difficult to scale in a cost-effective way.
Not my experience at all. We have zero problems maintaining large 20+ year old monoliths written in C++. It’s all about how you architect the monolith and how you refactor as needed to improve it.
As a user of several internal websites written as microservices I can also say they are a mistake. You keep having to hit [Shift]+reload to get an accurate view of the current state. Each page takes ages to update. They break far more frequently than regular sites.
Of course an internal website with probably at most 1000 users should never have been written using microservices in the first place.
I think you're mistaking front-end organisation vs microservices, which are a backend thing. Where the service lies on the monolith-microservices spectrum doesn't have to be visible externally.