Honestly, we originally did microservices because it sounded like a fun idea and because it would look really cool in our marketing materials. At the time, this was a very shiny new word that even our non-tech customers were dazzled by.
As oxidation and reality set in, we realized the shiny thing was actually a horrific distraction from our underlying business needs. We lost 2 important customers because we were playing type checking games across JSON wire protocols instead of doing actual work. Why spend all that money for an expensive workstation if you are going to do all the basic bullshit in your own brain?
We are now back on a monolithic software stack. We also use a monorepo, an obvious pairing that goes with the grain. Some days we joke as a team about the days when we'd have to go check for issues or API contract mismatches across 9+ repositories. Now, when someone says "Issue/PR #12842" or provides a commit hash, we know precisely what that means and where to go to deal with it.
Monolithic software is better in literally every way if you can figure out how to work together as a team on a shared codebase. Absolutely no software product should start as a distributed cloud special. You wait until it becomes essential to the business and even then, only consider it with intense disdain as a technology expert.
The problem is that there are so many developers now who have never experienced anything that isn't some botched attempt at microservices. The idea that it's possible to encapsulate code and separate concerns in any other way is foreign to them, and an "API" to them is 100% synonymous with a REST/gRPC interface. So there's nothing for them to revert to, and they are doomed to repeat this pattern, clearly with the impression that this is what app development is.
Meanwhile a lot of the industry is trying to tell them that their problem is they haven't separated things enough and should be using lambdas for everything.
Couldn't agree more - just like NoSQL vs regular old SQL: don't assume you need a NoSQL solution until you actually prove you need it. Probably 90-97% of solutions will be better off with a relational database; boring, yes, but they 'just work'. Choose NoSQL only when you need it, and choose microservices the same way.
If you 'think' you need NoSQL or a MicroService architecture - chances are you don't.
I think one mistake is using the term “monolithic” in comparison to “microservices”. To a whole new generation of programmers (and a fair proportion of older ones also) it has a negative meaning somewhere approaching “a huge and growing mess of smelly unstructured, non-modular code”. It is a negative term invented to compare against the glory that be “microservices”.
I prefer to use the term “Local Modular Application” in lieu of anything better?
Over the past 10 years dev tools (eg. git, IDEs, laptops) have also become more performant by some small constant factor, 2-3x, which isn't massive in comparison to Moore's law stuff, but means that a larger fraction of new software projects can start monolithic and stay monolithic for longer.
Unless you're handling millions of lines of code and 1000s of developers, you won't need to abandon a monorepo or commit to expensive custom dev-platform investments.
I go back and forth on this. Having a single service is definitely much easier. However, consider the following scenario: you want an application that asynchronously ingests data from different sources (files, streams, database stores, etc.), performs some computations and aggregations, stores the results in a local DB, then lets the user display them. Would it make sense to separate the ingestion code into its own service, which calls out to another service responsible for writing to the local DB? The rationale is that writing to the DB is now controlled through one service (almost like a queue, where connections could be controlled), and the ingestion could be scaled up/down independently of the DB write service. Thoughts?
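For what it's worth, that same decoupling doesn't strictly require a network boundary. A minimal sketch (all names hypothetical) of the scenario above inside one process: a single writer thread owns the DB, draining an in-process queue, while ingestion "scales" by worker count:

```python
import queue
import threading

# Hypothetical sketch: the "DB write service" becomes a single writer
# thread draining an in-process queue, so writes stay serialized, while
# ingestion workers scale by thread count -- all inside one deployable unit.

write_queue = queue.Queue()
db = []  # stands in for the local DB

def writer():
    # Sole owner of the "DB": plays the role of the proposed write service.
    while True:
        record = write_queue.get()
        if record is None:  # shutdown sentinel
            break
        db.append(record)
        write_queue.task_done()

def ingest(source):
    # One worker per source; scale up/down by starting more threads.
    for item in source:
        write_queue.put(item * 2)  # "computation" before storage

writer_thread = threading.Thread(target=writer)
writer_thread.start()

workers = [threading.Thread(target=ingest, args=(range(3),)) for _ in range(2)]
for w in workers:
    w.start()
for w in workers:
    w.join()

write_queue.put(None)
writer_thread.join()
print(sorted(db))  # every ingested item, written by exactly one writer
```

You still get the queue-like control over DB connections; what you give up is independent deployment, which may or may not matter at your scale.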
Your problem was using JSON instead of a typed language like Protocol Buffers or even Java RPCs, and that you were using separate repos for everything.
Microservices don't even have to run in separate binaries, let alone separate machines.
I was fortunate to have a similar experience early on. That being said, I still firmly believe there are cases where "microservices" are appropriate, such as when you're married to a specialized framework that only supports Python 3.5 but you want to use modern tooling for everything else as much as possible :)
I'm a huge proponent of microservices, having worked on one of the earliest and largest ones in the cloud. And I absolutely think that they provide huge advantages to large companies -- smaller teams, easier releases, independently scaling, separation of concerns, a different security posture that I personally think is easier to secure, and so on.
It's not a surprise that most younger large enterprises use microservices, with Google being a notable exception. Google however has spent 10s, possibly 100s of millions of dollars on building tooling to make that possible (possibly even more than a billion dollars!).
All that being said, every startup I advise I tell them don't do microservices at the start. Build your monolith with clean hard edges between modules and functions so that it will be easier later, but build a monolith until you get big enough that microservices is actually a win.
I saw lots of churn working on microservices that were pre-production. When it’s like this, things are more tightly coupled than the microservice concept would have you believe and that causes additional work. Instead of writing a new function at a higher version, you had to go change existing ones - pretty much the same workflow as a monolith but now in separate code bases. And there wasn’t a need for any of these microservices to go to production before the front end product, so we couldn’t start incrementing the versioning for the API endpoints to avoid changing existing functions. A monolith almost doesn’t need API versioning for itself (usually libraries do that), but it’s effectively a version 1.0 contract if translated to microservices.
> they provide huge advantages to large companies […] every startup I advise I tell them don't do microservices at the start.
I think you nailed it. Microservices are a solution for organizational problems that arise when the company grows in size; unfortunately it's not rare to see small startups with a handful of engineers and 5 to 10 times more services…
Strongly agree with this. It's about leaning into Conway's law. How micro the services get is a variable, for sure, but it's definitely worth considering as partly technical and principally an organizational problem.
With good defaults, you can have a dev tools / platform team create a blessed path that most teams will easily adopt so you get a mostly standardized internal architecture (useful for mobility). It's harder to allow for lessons learned from one service team to transition to the org as a whole, but if the dev tools / platform team has great Principal SWEs, it'll work. It does mean that you need great people on the platform team, though, since mediocre people will attempt to freeze development to fixed toolchains and will be unable to see the big picture.
I think Amazon does a good job with their Principals here.
> Build your monolith with clean hard edges between modules and functions so that it will be easier later,
This is unfortunately very easy to override. Oh the rants I could write. If I could go back in time we would've put in a ton of extra linting steps to prevent people casually turning private things public* and tying dependencies across the stack. The worst is when someone lets loose a junior dev who finds a bunch of similar looking code in unrelated modules and decides it needs to be DRY. And of course nobody will say no because it contradicts dogma. Oh and the shit that ended up in the cookies... still suffering it a decade later.
*This is a lot better with [micro]services but now the code cowboys talk you into letting them connect directly to your DB.
>...you get big enough that microservices is actually a win.
Can you speak more about the criteria here?
You may be implying that microservices enforce Conway's law. If so, then when the monolith divides, it "gives away" some of its API to another name, such that the new node has its own endpoints. This named set is adopted by a team and evolves separately from that point on, according to cost/revenue. The team and its microservice form a semi-autonomous unit, in theory able to evolve faster in relative isolation from the original.
The problem from the capital perspective is that you get a bazillion bespoke developer experiences, all good and bad in their unique and special ways, which means that the personal dev experience will matter, a guide in the wilderness who's lived there for years. The more tools are required to run a typical DX, the more tightly coupled the service will be to the developers who built it. This generally favors the developer, which may also explain why the architecture is popular.
> Build your monolith with clean hard edges between modules and functions so that it will be easier later, but build a monolith until you get big enough that microservices is actually a win.
I'd like to see software ecosystems that make it possible to develop an application that seems like a monolith to work with (single repository, manageable within a seamless code editing environment, with tests that run across application modules) and yet has the same deployment, monitoring and scale up/out benefits that microservices have.
Ensuring that the small-team benefits would continue to exist (comparative to 'traditional' microservices) in that kind of platform could be a challenge -- it's a question of sensibly laying out the application architecture to match the social/organizational structure, and for each of those to be cohesive and effective.
I went from a company with an engineering team of 40-odd engineers, which was trying to move towards microservices and being really slowed down by it, to a company with thousands of engineers in a hybrid mode of a couple of monoliths plus a lot of microservices. I can definitely appreciate that there is very much a scale at which it absolutely makes sense for an engineering organisation to use microservices, and very much a scale below which it's fairly counter-productive.
I find the organizational arguments to be pretty convincing, but surely there must be a way to reap these rewards in a monolithic infra setup as well? Maybe someone should develop a "monolith microservice architecture" where all the services are essentially (and enforced to be) isolated, but the whole thing is built and deployed as a single unit.
You could do it with docker-compose I guess, but optimally your end result would be a single portable application.
The last microservice architecture I worked on consisted of 7 python repositories that shared one "standard library" repository. Something as simple as adding an additional argument to a function required a PR to 7 repos and sign off on each. When it came to release we had to do a mass docker swarm restart and spin up because the giant entanglement of micro-services was really just a monolithic uncontrolled Cthulhu in disguise.
The business revolved around filling out a form and PDF generations of said form. I felt like I got no work done in a year and so I left
So... you worked on something terrible that people called a "microservice architecture"? Once a pattern gets popular, people start writing nasty code in the style, and then the pattern takes the reputation hit and people move on to the next thing (or just back to the last thing). Rinse, repeat.
My company uses microservices; deploys restart one service and PRs are one repo at a time. There's a shared library, but it's versioned and there's nothing compelling you to keep on the bleeding edge.
> I worked on consisted of 7 python repositories that shared one "standard library" repository. Something as simple as adding an additional argument to a function required a PR to 7 repos and sign off on each
This is an engineering process failure, not a failure of microservices or shared dependencies. You should be versioning your shared library, that way you only need to make a deployment to the service that requires the update, leaving the others pegged at the previous version until a business or engineering need motivates the upgrade.
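Alongside versioning, the "new argument means 7 PRs" pain often has a code-level fix: add the parameter with a default so every existing call site keeps working against the new library version. A hypothetical sketch (`render_form` and its parameters are invented for illustration, loosely echoing the PDF-form example above):

```python
from typing import Optional

# Hypothetical sketch of evolving a shared-library function without
# breaking the six other services that call it: the new argument gets a
# default, so old call sites work unchanged against the new version.

def render_form(data: dict, template: str = "default",
                watermark: Optional[str] = None) -> str:
    # 'watermark' is the newly added argument; only the one service that
    # needs it passes it, everyone else keeps the old call shape.
    out = f"{template}:{sorted(data)}"
    if watermark is not None:
        out += f"|wm={watermark}"
    return out

# Old caller, untouched across the other repos:
print(render_form({"name": "Ada"}))  # default:['name']
# New caller, the only one that actually needed a PR:
print(render_form({"name": "Ada"}, watermark="draft"))
```

Combined with pinned library versions per service, only the service that wants the new behavior has to upgrade and review anything.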
I have had a similar experience building microservices that used shared repositories. The PR paperwork was so bad that at one point I made all my services self-contained, just to avoid having to modify my own code in two different places and synchronize the changes.
The whole problem, I think, comes from the "split the code" cargo-cult. We need to think about why we're splitting the code, and use that why to figure out when to split code.
IMHO, code separation arises naturally from modular programming - once your code is mature enough, it becomes just a piece of glue around a set of libraries that you can just rip out and put in their own repos, provided that they're useful enough.
One upside of dealing with this mess (and understanding all the intricacies, pains, etc.) we're in is being able to laugh at the kafkaesque situations people find themselves in now, like your story or other blog posts.
This is blowing my mind. I felt like you were specifically talking about my place of work. Turns out (based on your github link in your profile), you did work there haha!
The 7 repos are gone now and a k8s cluster has replaced the swarm service. Deployments are a bit easier to manage now. It's still such a slow development process to add a simple API endpoint, especially if that API has to call other DAOs. It's crazy because I feel like it's completely normalized here that dev work takes 300% longer than it should for simple features.
The reason why "we" are doing this is because in a lot of cases, the tech is no longer used as a means to solve a business problem - instead, the tech is the end-game itself and complexity is a desired feature.
The market has been distorted by endless amounts of VC funding for very dubious ideas that would never be profitable to begin with, so now you have 2 solutions: you can spend a few hundred grand building a boring solution with a slim amount of engineers, realize it doesn't work and quit, or you can keep over-engineering indefinitely, keep raising more and more millions, enjoying the "startup founder" lifestyle while providing careers to unreasonable amounts of engineers with no end in sight because you're too busy over-engineering rather than "solving" the business problem, so the realization that the business isn’t viable never actually comes.
Which one do you pick? The market currently rewards the second option for all parties involved, so it's become the default choice. Worse, it’s been going on long enough that a lot of people in the industry consider this normal and don’t know any other way.
Before microservices were a thing, I had the chance to work on a couple of telecom systems written in Erlang/OTP, but it wasn't until years later that I realized we were already doing most of the things people were using microservices for, with the single exception of being polyglot (although Elixir and Gleam are starting to challenge that).
Small teams dealt with specific functionality, and they were largely autonomous as long as we agreed upon the API, which was all done via Erlang's very elegant message passing system. Scalability was automatic, part of the runtime. We had system-wide visibility and anyone could test anything, even on their own computers. We didn't have to practice defensive programming thanks to OTP, and any systemic failure was easier to detect and fix. Updates could be applied hot, while the system was running, which is one of the nicest features of the BEAM and something microservices try to address.
All the complexity associated with microservices, or even Kubernetes and service meshes, is ultimately a way to achieve some sort of "polyglot BEAM". But I question whether it's really worth it for all use cases. A lot of the "old" technology has kept evolving nicely, and I'd be perfectly fine using it to achieve the required business outcomes.
I found microservices had the benefit of increasing release cadence and decreasing merge conflicts.
Are there complications? Sure. Are they manageable? Relatively easily with correct tooling. Do microservices (with container management) allow you better use of your expensive cloud resources? That was our experience, and a primary motivator.
I also feel they increase developer autonomy, which is very valuable IMO.
Decreasing merge conflicts sounds more like muting and/or deferring problems.
Microservice fanaticism seems to be coupled with this psychosclerotic view that the world can exist only as microservices or as a monolith.
From what I've seen in the last 20+ years, if I had to pick one sentence to describe a fits-all enterprise setup (and that's as stupid as saying "X is the best" without context), it'd be: a monorepo with a dozen or two services, shared libraries, typed so refactoring and changes are reliable and fast, single-versioned, deployed at once, using a single database in most cases; one setup like this per team of up to 12 devs. Multiple teams like this, with coordinated backward compatibility on the interfaces where they interact.
If you're going to complain about something you need to present some data that backs up your point; this is just a bunch of rambling opinions.
A lot of software engineering is about managing modularization, I've lived through structured programming, OOx, various distributed object schemes and now this. Basically all these mechanisms attempt to solve the problem of modularization and reuse. So, the fact that new solutions to the old problems appear just means that it's a hard problem worth working on. I'd say use every single technique when it's appropriate.
> While microservices talk likes to pretend the solution is some horrific “monolith”, we never really had “monoliths” before in development that I experienced. What we had were some kinds of tiered architectures.
I've worked with monoliths. The author must not have experienced them. I've worked places that had builds that took hours to run, git merges that took days, and commit histories that were unreadable.
The developer experience working with it was one of CONSTANT frustration. The system was too big to make large changes safely. Incremental changes were too incremental and costly.
Note that nowhere in here am I saying that microservice architecture should always be preferred. But the idea that it's all just some sort of trend with no real underlying advantage is sort of silly.
Every company I've ever been at with a monolith tends to have "untouchables": keepers of the architecture and the original design who understand the system orders of magnitude better than anyone else. That doesn't scale, and it really messes with an engineering organization.
There's Conway's law, where software eventually reflects the organizational structure of the company, but there's also a sort of reverse Conway's law: when you have teams dedicated to specific services, you can target investment at those teams when their services are not executing well enough.
Yeah I agree, it all breaks down when you need to make large scale changes. Something like updating a core library becomes virtually impossible because there is no half step. You have to fix _everything_ before you can merge and that takes long enough that the merge becomes horrific. I was told that updating Rails at GitHub was a multi year project involving building a compatibility layer so the app could run on both versions at once.
If you're going to embrace microservices, you need to be VERY confident that they solve real problems that you currently have and that they will result in an improvement to your engineering velocity within a reasonable time-frame.
The additional complexity - in terms of code and operations - is significant. You need to be very confident that it's going to pay for itself.
I have been around for a while too, and I think I can answer the rhetorical question: it's a great fulcrum upon which to build teams and springboard careers, and by the time the problems have calcified there's been enough turnover or promotion that the reason they were put in place is completely lost.
I do not say this with bitterness accumulated while building them: on the contrary, it's something I've usually realised only much later, when it was too late (and more than once).
Incompetent teams and engineering organizations will find a way to mess up both monoliths and microservices. Great ones will pick what works best for their specific use case and be effective at it.
The only correct answer is to not waste time with the decade+ worth of pointless internet debates on the topic.
There's a degree to which I agree with this, but the advantage monoliths have is the "opinionated" frameworks (chiefly Rails, Django and the like) that hand-hold a less competent team towards a sane design.
In comparison, building a good set of microservices is a minefield of infinite possibilities, with each decision about where a particular responsibility or piece of data should live being quite significant and often quite painful to change your mind about.
> If we think about the fastest way to execute some code, that is a function call, not a web request.
No, the fastest way to execute some code is a goto. Be careful with arguments from performance, that's how you get garbage like a former colleague's monstrous 10k SLOC C(++) function (compiled as C++, but it was really C with C++'s IO routines). Complete with a while(1) loop that wrapped almost the entire function body. When you need speed, design for speed, but you almost always need clarity first. Optimizations can follow.
> If we think about resilient systems, the most resilient systems are the ones with the least number of moving parts. The same is true for the fastest systems.
I suggest care with this argument as well. This would, naively interpreted, suggest that the most resilient system has 1 moving part (0 if we allow for not creating a system altogether). First, this is one of those things that doesn't have a clean monotonically increasing/decreasing curve to it. Adding a moving part doesn't automatically make it less resilient, and removing one doesn't automatically make it more resilient. There is a balance to be struck somewhere between 1 (probably a useless system, like leftpad) and millions. Second, there's a factor not discussed: It's the interaction points, not the number of moving parts themselves, that provides a stronger impact on resilience.
If you have 500 "moving parts" that are linearly connected (A->B->C->D->...), sure it's complicated but it's "straightforward". If something breaks you can trace through it and see which step received the wrong thing, and work backwards to see which prior step was the cause. If you have 500 moving parts that are all connected to each other then you have 500(500-1)/2 interactions that could be causing problems. That's the way to destroy resilience, not the number of moving parts but the complex interaction between them.
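The combinatorics above are easy to make concrete; a quick sketch of the link counts for a linear pipeline versus a fully connected mesh:

```python
# Interaction points, not part count, drive debugging cost: a linear
# pipeline of n parts has n - 1 links, while a fully connected mesh has
# n * (n - 1) / 2 possible pairwise interactions.

def linear_links(n: int) -> int:
    return n - 1

def mesh_links(n: int) -> int:
    return n * (n - 1) // 2

n = 500
print(linear_links(n))  # 499
print(mesh_links(n))    # 124750
```

So going from a chain to a mesh at 500 parts multiplies the possible failure interactions by roughly 250x, which is the real resilience cost.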
Microservices work well if your contracts are well-defined, domain knowledge is limited, your team is no bigger than a two-pizza team, and your platform needs are diverse. E.g.: some small teams prefer containers, others prefer managed containers (serverless), others prefer small VMs.
SOA works well if your teams are larger and have larger domain (or end to end, however you want to call it) knowledge.
Monoliths work well when the domain of the application is singular, the team is large, or you're prototyping. The big downside of monoliths is that their scaling model must be considered in advance, or engineers can tactically corner themselves with the architecture. That incurs big, expensive rewrites as well as lost time.
While Conway's Law may be reflective of the enterprises use of (or overuse of) microservices I think it really has more to do with a different enterprise habit: understaffing and budget constraint. Microservices and client-side applications, from my perspective, very rarely have long-term maintainers. Instead, things get done in cycles and then for most of the year a given service does not receive anything besides some maintenance updates or low-hanging fixes. That makes it look like a microservice is expendable and easier to replace to the people who manage resources, staffing, and budgets. Thus, things now look "modular" to the people who fund the ship that everyone else drives.
I see a lot of people acting like microservices are some conspiracy theory pushed on us engineers. I’ve never worked anywhere that pushed microservices, the places I’ve used them they tended to be additional functionality we could easily decouple from the standard backend. Even if they were I like the idea of microservices, having everything as abstracted away from each other as possible. Also would probably make code easier to onboard, just get a junior up to speed on one service at a time.
As I build out my infrastructure for Adama (my real-time SaaS for state machines), I'm leaning hard into a monolithic design. The key reason is to minimize cost and maximize performance.
For instance, comparing Adama to the AWS services needed to build similar experiences has interesting results. Adama costs 97% less than AWS ( https://www.adama-platform.com/2022/03/18/progress-on-new-da... ), and a key factor is that the microservice approach meters every interaction, which scales linearly with demand, whilst a monolithic approach condenses compute and memory.
I've been at a place where a single person is juggling twenty microservices to power a product with barely any users. Just the infra cost alone makes it insane.
"But one day, when we get massive growth, it will all be worth it", he says.
Alas that day may not come, since he is busy configuring load balancers and message queues instead of developing features.
He made the point that micro-services are a deployment method not an architecture. A good clean architecture shouldn't care how it's deployed. If you need to move from plugins to micro-services to be massively scalable your architecture shouldn't care. If you need to move from micro-services to plugins to make your app simple to host and debug, your architecture should also not care.
This strategy has been implemented in frameworks like abp.io very successfully. You can start your application as a single collection of split assemblies deployed as a single application and move to deploying as micro-services when it's necessary.
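A minimal sketch of "deployment method, not architecture" (all names here are hypothetical, not abp.io's actual API): the composition root is the only place that knows whether a dependency is local or remote, so switching is a startup decision rather than a redesign:

```python
import os

# Hypothetical sketch: the architecture exposes one interface; whether the
# implementation is a local call or a remote one is decided at startup,
# so "microservice vs plugin" becomes a deployment flag, not a rewrite.

class LocalGreeter:
    def greet(self, name: str) -> str:
        return f"hello {name}"

class RemoteGreeter:
    def __init__(self, base_url: str):
        self.base_url = base_url  # an HTTP client would live here

    def greet(self, name: str) -> str:
        raise NotImplementedError("wire up your HTTP client of choice")

def build_greeter():
    # The composition root is the only place that knows about deployment.
    if os.environ.get("GREETER_URL"):
        return RemoteGreeter(os.environ["GREETER_URL"])
    return LocalGreeter()

print(build_greeter().greet("world"))  # hello world (when GREETER_URL is unset)
```

The business code calls `greet` either way; debugging locally means simply not setting the flag.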
Whether you build microservices or just services, distributed systems are undeniably here to stay. In today’s world there are very few products that can run on a single machine, whether it is for latency or availability or redundancy.
That said, the challenges of building such systems are real, and the developer experience is universally quite awful compared to our monolithic, single-server past.
It’s for that reason that I’ve been building [1] for the past four years. Would love your feedback if the OP resonates with you.
This article fails to mention anything about team size which should be the first criteria for any decision about "microservices" (in quotes because it's just new terminology someone made up because service-oriented architecture wasn't cool anymore and they had to prove how original and modern their thinking was). Half of the engineering orgs chasing this fad are <50 people and have absolutely no reason to be adding the overhead of an SOA. The rules of thumb should be if you have more than 1 service per 4 engineers it's too many; and if your total engineering org size is not big enough to support a dedicated infra team of >4 engineers working full-time on tools just to support the other teams then you're not big enough.
Over-complicating things to pad your resume might get you into FAANG but it won't make you a good engineer or a good entrepreneur. The sign of a truly senior engineer is one who knows how to keep things as simple as possible to solve real problems while maximizing power to weight ratio of their code. The resume-driven-development anti-pattern is pretending that the problems facing 1000 or 10000 person orgs are your problems. Those large companies got to where they are by solving the problems in front of them, and you won't get to that scale if you don't do the same.
Google famously has a monorepo but that's different from a monolithic service architecture.
[+] [-] littlestymaar|4 years ago|reply
I think you nailed it. Microservices are a solution for organizational problems that arise when the company grow in size, unfortunately it's not rare to see small startups with a handful of engineers and 5 to 10 times more services…
[+] [-] renewiltord|4 years ago|reply
With good defaults, you can have a dev tools / platform team create a blessed path that most teams will easily adopt so you get a mostly standardized internal architecture (useful for mobility). It's harder to allow for lessons learned from one service team to transition to the org as a whole, but if the dev tools / platform team has great Principal SWEs, it'll work. It does mean that you need great people on the platform team, though, since mediocre people will attempt to freeze development to fixed toolchains and will be unable to see the big picture.
I think Amazon does a good job with their Principals here.
[+] [-] foobarian|4 years ago|reply
This is unfortunately very easy to override. Oh the rants I could write. If I could go back in time we would've put in a ton of extra linting steps to prevent people casually turning private things public* and tying dependencies across the stack. The worst is when someone lets loose a junior dev who finds a bunch of similar looking code in unrelated modules and decides it needs to be DRY. And of course nobody will say no because it contradicts dogma. Oh and the shit that ended up in the cookies... still suffering it a decade later.
*This is a lot better with [micro]services but now the code cowboys talk you into letting them connect directly to your DB.
[+] [-] javajosh|4 years ago|reply
Can you speak more about the criteria here?
You may be implying that microservices enforce Conway's law. If so, then when the monolith divides, it "gives away" some of its API to another name, such that the new node has its own endpoints. This named set is adopted by a team and evolves separately from that point on, according to cost/revenue. The team and its microservice form a semi-autonomous unit, in theory able to evolve faster in relative isolation from the original.
The problem from the capital perspective is that you get a bazillion bespoke developer experiences, all good and bad in their unique and special ways, which means the personal dev experience matters: you need a guide in the wilderness who's lived there for years. The more tools required to run a typical DX, the more tightly coupled the service will be to the developers who built it. This generally favors the developer, which may also explain why the architecture is popular.
[+] [-] jka|4 years ago|reply
I'd like to see software ecosystems that make it possible to develop an application that seems like a monolith to work with (single repository, manageable within a seamless code editing environment, with tests that run across application modules) and yet has the same deployment, monitoring and scale up/out benefits that microservices have.
Ensuring that the small-team benefits would continue to exist (compared to 'traditional' microservices) on that kind of platform could be a challenge: it's a question of sensibly laying out the application architecture to match the social/organizational structure, and of making each of those cohesive and effective.
[+] [-] mekkkkkk|4 years ago|reply
You could do it with docker-compose I guess, but optimally your end result would be a single portable application.
[+] [-] goodpoint|4 years ago|reply
Which ones? Amazon uses roughly 1-team-1-service, not 1-team-100-*micro*services.
Facebook famously built their main service as a monolith.
Edit: and don't get me wrong, I'm not saying services are bad - as long as they are the right size and with the right design rather than tiny.
[+] [-] glouwbug|4 years ago|reply
The business revolved around filling out a form and generating PDFs of said form. I felt like I got no work done in a year, so I left.
[+] [-] pkulak|4 years ago|reply
My company uses microservices; deploys restart one service and PRs are one repo at a time. There's a shared library, but it's versioned and there's nothing compelling you to keep on the bleeding edge.
[+] [-] root_axis|4 years ago|reply
This is an engineering process failure, not a failure of microservices or shared dependencies. You should be versioning your shared library, that way you only need to make a deployment to the service that requires the update, leaving the others pegged at the previous version until a business or engineering need motivates the upgrade.
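Concretely, version-pegging a shared library just means each service declares its own pinned version (the file layout and library name below are hypothetical):

```
# payments-service/requirements.txt — still pegged to the older release
acme-shared-lib==1.4.2

# notifications-service/requirements.txt — upgraded when it needed the new API
acme-shared-lib==2.0.0
```

Each service upgrades on its own schedule, so a library change never forces a fleet-wide redeploy.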
[+] [-] paskozdilar|4 years ago|reply
The whole problem, I think, comes from the "split the code" cargo-cult. We need to think about why we're splitting the code, and use that why to figure out when to split code.
IMHO, code separation arises naturally from modular programming - once your code is mature enough, it becomes just a piece of glue around a set of libraries that you can just rip out and put in their own repos, provided that they're useful enough.
[+] [-] willhoyle|4 years ago|reply
The 7 repos are gone now and a k8s cluster has replaced the swarm service. Deployments are a bit easier to manage now, but it's still such a slow development process to add a simple API endpoint, especially if that API has to call other DAOs. It's crazy because it feels completely normalized here that dev work takes 300% longer than it should for simple features.
[+] [-] Nextgrid|4 years ago|reply
The market has been distorted by endless amounts of VC funding for very dubious ideas that would never be profitable to begin with. So now you have two options: spend a few hundred grand building a boring solution with a slim team of engineers, realize it doesn't work, and quit; or keep over-engineering indefinitely, raising more and more millions and enjoying the "startup founder" lifestyle while providing careers to unreasonable numbers of engineers, with no end in sight. Because you're too busy over-engineering rather than "solving" the business problem, the realization that the business isn't viable never actually comes.
Which one do you pick? The market currently rewards the second option for all parties involved, so it's become the default choice. Worse, it’s been going on long enough that a lot of people in the industry consider this normal and don’t know any other way.
I've commented/ranted about this before, see https://news.ycombinator.com/item?id=30008257, https://news.ycombinator.com/item?id=24926060 and https://news.ycombinator.com/item?id=30272588.
[+] [-] Natales|4 years ago|reply
Small teams dealt with specific functionality, and they were largely autonomous as long as we agreed upon the API, which was all done via Erlang's very elegant message-passing system. Scalability was automatic, part of the runtime. We had system-wide visibility, and anyone could test anything, even on their own computers. We didn't have to practice defensive programming thanks to OTP, and any systemic failure was easier to detect and fix. Updates could be applied hot, while the system was running, one of the nicest features of the BEAM that microservices try to address.
All the complexity associated with microservices, or even Kubernetes and service meshes, is ultimately a way to achieve some sort of "polyglot BEAM". But I question if it's really worth it for all use cases. A lot of the "old" technology has kept evolving nicely, and I'd be perfectly fine using it to achieve the required business outcomes.
[+] [-] sarks_nz|4 years ago|reply
Are there complications? Sure. Are they manageable? Relatively easily with correct tooling. Do microservices (with container management) allow you better use of your expensive cloud resources? That was our experience, and a primary motivator.
I also feel they increase developer autonomy, which is very valuable IMO.
[+] [-] mirekrusin|4 years ago|reply
Microservice fanaticism seems to be coupled with this sclerotic view that the world can exist only as microservices or as a monolith.
From what I've seen in the last 20+ years, if I had to pick one sentence to describe a fits-all enterprise setup (and it's as stupid as saying "X is the best" without context), it'd be: a monorepo with a dozen or two services, shared libraries, typed so refactoring and changes are reliable and fast, single-versioned, deployed at once, using a single database in most cases. One setup like this per team of up to 12 devs, with multiple such teams coordinating backward compatibility on the interfaces where they interact.
[+] [-] zwieback|4 years ago|reply
A lot of software engineering is about managing modularization. I've lived through structured programming, OOx, various distributed-object schemes, and now this. All of these mechanisms attempt to solve the problem of modularization and reuse, so the fact that new solutions to old problems keep appearing just means it's a hard problem worth working on. I'd say use each technique when it's appropriate.
[+] [-] taurath|4 years ago|reply
I've worked with monoliths; the author must not have experienced them. I've worked places that had builds that took hours to run, git merges that took days, and commit histories that were unreadable.
The developer experience working with it was one of CONSTANT frustration. The system was too big to make large changes safely. Incremental changes were too incremental and costly.
Note that nowhere in here am I saying microservice architecture should always be preferred. But the idea that it's all just some sort of trend with no real underlying advantage is sort of silly.
Every company I've ever been at with a monolith tends to have "untouchables" who understand the architecture and the original design orders of magnitude better than anyone else. That doesn't scale, and it really messes with an engineering organization.
There's Conway's law, where software eventually reflects the organization structure of the company, but there's also a sort of reverse Conway's law: when you have teams dedicated to specific services, you can target investment in those teams when their services aren't executing well enough.
[+] [-] simonw|4 years ago|reply
The additional complexity - in terms of code and operations - is significant. You need to be very confident that it's going to pay for itself.
[+] [-] paxys|4 years ago|reply
The only correct answer is to not waste time with the decade+ worth of pointless internet debates on the topic.
[+] [-] ris|4 years ago|reply
In comparison, building a good set of microservices is a minefield of infinite possibilities, with each decision about where a particular responsibility or piece of data should live being quite significant and often quite painful to change your mind about.
[+] [-] Jtsummers|4 years ago|reply
No, the fastest way to execute some code is a goto. Be careful with arguments from performance, that's how you get garbage like a former colleague's monstrous 10k SLOC C(++) function (compiled as C++, but it was really C with C++'s IO routines). Complete with a while(1) loop that wrapped almost the entire function body. When you need speed, design for speed, but you almost always need clarity first. Optimizations can follow.
> If we think about resilient systems, the most resilient systems are the ones with the least number of moving parts. The same is true for the fastest systems.
I suggest care with this argument as well. This would, naively interpreted, suggest that the most resilient system has 1 moving part (0 if we allow for not creating a system altogether). First, this is one of those things that doesn't have a clean monotonically increasing/decreasing curve to it. Adding a moving part doesn't automatically make it less resilient, and removing one doesn't automatically make it more resilient. There is a balance to be struck somewhere between 1 (probably a useless system, like leftpad) and millions. Second, there's a factor not discussed: It's the interaction points, not the number of moving parts themselves, that provides a stronger impact on resilience.
If you have 500 "moving parts" that are linearly connected (A->B->C->D->...), sure it's complicated but it's "straightforward". If something breaks you can trace through it and see which step received the wrong thing, and work backwards to see which prior step was the cause. If you have 500 moving parts that are all connected to each other then you have 500(500-1)/2 interactions that could be causing problems. That's the way to destroy resilience, not the number of moving parts but the complex interaction between them.
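The combinatorics in that comparison can be checked directly:

```python
def pairwise_interactions(n: int) -> int:
    """Interactions possible if every part can talk to every other part."""
    return n * (n - 1) // 2


def linear_interactions(n: int) -> int:
    """A linear pipeline A->B->C->... of n parts has only n-1 interaction points."""
    return n - 1


print(linear_interactions(500))    # 499
print(pairwise_interactions(500))  # 124750
```

Same 500 moving parts, a 250x difference in the number of interactions that could be causing a problem, which is the resilience argument in a nutshell.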
[+] [-] kodah|4 years ago|reply
SOA works well if your teams are larger and have broader domain (or end-to-end, however you want to call it) knowledge.
Monoliths work well when the domain of the application is singular, the team is large, or you're prototyping. The big downside for monoliths is that their scaling model must be considered in advance, or engineers can tactically corner themselves with architecture. That incurs big, expensive rewrites as well as time.
While Conway's Law may explain the enterprise's use (or overuse) of microservices, I think it really has more to do with a different enterprise habit: understaffing and budget constraints. Microservices and client-side applications, from my perspective, very rarely have long-term maintainers. Instead, things get done in cycles, and for most of the year a given service receives nothing besides some maintenance updates or low-hanging fixes. That makes a microservice look expendable and easy to replace to the people who manage resources, staffing, and budgets. Thus, things now look "modular" to the people who fund the ship that everyone else drives.
[+] [-] mathgladiator|4 years ago|reply
For instance, comparing Adama to the AWS services needed to build similar experiences has interesting results. Adama costs 97% less than AWS ( https://www.adama-platform.com/2022/03/18/progress-on-new-da... ), and a key thing is that the microservice approach is amenable to metering every interaction, which scales linearly with demand, whilst a monolithic approach condenses compute and memory.
[+] [-] hackerfromthefu|4 years ago|reply
That means that the AWS service-based option costs 33 times more: if Adama costs 97% less, the AWS option costs 1/0.03 ≈ 33 times as much. Not ten times more, but thirty-plus times more.
[+] [-] mekkkkkk|4 years ago|reply
"But one day, when we get massive growth, it will all be worth it", he says.
Alas that day may not come, since he is busy configuring load balancers and message queues instead of developing features.
[+] [-] tiberriver256|4 years ago|reply
He made the point that micro-services are a deployment method, not an architecture. A good clean architecture shouldn't care how it's deployed. If you need to move from plugins to micro-services to be massively scalable, your architecture shouldn't care. If you need to move from micro-services to plugins to make your app simple to host and debug, your architecture should also not care.
This strategy has been implemented in frameworks like abp.io very successfully. You can start your application as a single collection of split assemblies deployed as a single application and move to deploying as micro-services when it's necessary.
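One common way to express "the architecture shouldn't care" is to depend on an interface and decide the deployment at wiring time. This is an illustrative Python sketch, not abp.io's actual API, and all the names are invented:

```python
from abc import ABC, abstractmethod


class PaymentGateway(ABC):
    """The application depends only on this interface; whether it's an
    in-process plugin or a remote microservice is a wiring decision."""

    @abstractmethod
    def charge(self, order_id: str, amount_cents: int) -> bool: ...


class InProcessPayments(PaymentGateway):
    """Deployed inside the monolith: a direct function call."""

    def charge(self, order_id: str, amount_cents: int) -> bool:
        return amount_cents > 0  # stand-in for real billing logic


class RemotePayments(PaymentGateway):
    """Deployed as a microservice: same interface, HTTP under the hood."""

    def __init__(self, base_url: str):
        self.base_url = base_url

    def charge(self, order_id: str, amount_cents: int) -> bool:
        # Would POST to {base_url}/charge; elided in this sketch.
        raise NotImplementedError


def checkout(gateway: PaymentGateway, order_id: str) -> bool:
    # Application code is identical regardless of deployment.
    return gateway.charge(order_id, 1999)
```

Swapping `InProcessPayments()` for `RemotePayments("https://...")` changes deployment without touching `checkout`.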
[+] [-] j_oshi_curu|4 years ago|reply
Regarding division of work, those advocates seem to have forgotten that libraries exist.
Or that you can deploy a monolith and still scale endpoints independently.
Microservices might be a good fit for a tiny fraction of real-world scenarios.
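Scaling endpoints independently from one monolith can be as simple as a per-instance role flag that selects which route groups an instance serves. A minimal sketch, with all handler and group names invented:

```python
# Hypothetical sketch: one binary, but each instance serves only the
# endpoint groups it is configured for, so hot paths can be scaled out
# without splitting the codebase.

def handle_orders(request):
    return {"status": "order accepted"}


def handle_reports(request):
    return {"status": "report queued"}


ROUTE_GROUPS = {
    "orders":  {"/orders": handle_orders},
    "reports": {"/reports": handle_reports},
}


def build_router(enabled_groups):
    """Return only the routes this instance should serve."""
    routes = {}
    for group in enabled_groups:
        routes.update(ROUTE_GROUPS[group])
    return routes


# Fleet example: ten instances run build_router(["orders"]),
# one instance runs build_router(["reports"]). Same code, same deploy
# artifact, different scaling per endpoint group.
```

Behind a load balancer that routes by path, this gives microservice-style scaling with monolith-style development.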
[+] [-] eandre|4 years ago|reply
That said, the challenges of building such systems are real, and the developer experience is universally quite awful compared to our monolithic, single-server past.
It’s for that reason that I’ve been building [1] for the past four years. Would love your feedback if the OP resonates with you.
[1] https://encore.dev
[+] [-] dasil003|4 years ago|reply
Over-complicating things to pad your resume might get you into FAANG but it won't make you a good engineer or a good entrepreneur. The sign of a truly senior engineer is one who knows how to keep things as simple as possible to solve real problems while maximizing power to weight ratio of their code. The resume-driven-development anti-pattern is pretending that the problems facing 1000 or 10000 person orgs are your problems. Those large companies got to where they are by solving the problems in front of them, and you won't get to that scale if you don't do the same.