This article signals to me that we may have reached ‘peak microservice’ and should now expect a flood of blog posts extolling the virtues of monoliths and lots of industry effort to combine microservices into self-contained monoliths.
This is likely an article aimed at big companies moving their apps to GCP. Google probably wants them to be able to utilize preemptible nodes to reduce cost, and with a monolith you can't really set meaningful service-level agreements if you don't have reliable hardware backing you.
I don't think "microservices", the idea of having an interface between two systems/codebases/libraries, will ever die. The more microservice-y bits (separate apps with RPCs) make this sort of work much easier.
I'm not so convinced. I'll assert that a service boundary can absolutely exist between two modules running within the same binary image. The distinction comes down to grading your application along a few axes: the ratio of deployable units to bounded contexts, ball of mud versus domain-driven design, release-engineering maturity, and a few others I may be missing.
I'll further assert that "a monolith" is not the antithesis to "microservices architecture." What this article talks about is mainly the ball of mud versus domain-driven design axis, with a nod or two to challenges with a single deployable unit delivering multiple bounded contexts and challenges with developing mature release engineering practices.
What I do see happening is perhaps a reshuffling: fashionably architected microservices applications, deployed in perhaps multiple units, coming to resemble DDD-style microservices applications deployed in perhaps fewer units than before. Let me explain.
In my experience, a lot of the problems that the article mentions with respect to applications deployed as singular units are representative of underlying architectural, design, and org chart problems. That goes without even mentioning that the term "microservice" tends not to be understood anywhere near as correctly as it really deserves (or needs) to be.
With respect to that last one, the org chart problems: I've noticed that reorgs and not understanding the application's bounded contexts tend to go hand-in-hand. If you hammer down what your bounded contexts should be, you'll very likely have a much more natural set of boundaries for teams to work on the same code together. This requires some bridging and "kum-ba-yah" between the business folks and engineering, but in the grander scheme of things, having everyone on the same page seems to be super worth it.
Hammered-down bounded contexts, in turn, will help you define your aggregate roots. This will then give you the ability to scale out your persistence layer along your aggregate root boundaries and help define how to partition your dataset among several data stores. If you're lucky, this will help you kick the proverbial can down the road in terms of staving off the need to shard those data stores. This addresses the design and architecture problems.
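To make that partitioning point concrete, here's a minimal sketch (all names hypothetical): once each aggregate root owns everything inside it, a whole aggregate can be routed to a single data store, which is exactly the seam you'd scale along.

```python
import hashlib

# Hypothetical stores; in practice these would be connection strings
# or client handles for separate databases.
STORES = ["orders-db-0", "orders-db-1", "orders-db-2"]

def store_for(aggregate_root_id: str) -> str:
    """Route an entire aggregate (e.g. an Order plus its line items)
    to exactly one store, so it is always read and written as a unit."""
    # A stable hash keeps the same aggregate pinned to the same store.
    digest = hashlib.sha256(aggregate_root_id.encode()).digest()
    return STORES[int.from_bytes(digest[:8], "big") % len(STORES)]
```

The point isn't the hashing scheme; it's that the routing key falls out of the aggregate boundary. Because no transaction ever spans two aggregates, no transaction ever spans two stores.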
Going further, defining proper service interfaces at the boundaries of your bounded contexts will then allow you, at some point when it's truly necessary for the operations overhead to do so, to deploy your application in more than the one unit. Most importantly, however, it'll help with release engineering even in the singular deployment unit case. A service you consume can define an interface that today is a direct call into another module but tomorrow can be an RPC across the network.
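A minimal sketch of that last point (service and method names hypothetical, transport stubbed): the consumer codes against an interface, and whether the implementation is an in-process call or an RPC across the network is a deployment detail.

```python
from abc import ABC, abstractmethod

# Hypothetical "billing" service consumed by another module. The
# consumer depends only on this interface, never on the transport.
class BillingService(ABC):
    @abstractmethod
    def invoice_total(self, customer_id: str) -> int: ...

# Today: a direct call into another module in the same deployable unit.
class InProcessBilling(BillingService):
    def __init__(self, ledger: dict):
        self.ledger = ledger  # customer_id -> list of charges

    def invoice_total(self, customer_id: str) -> int:
        return sum(self.ledger.get(customer_id, []))

# Tomorrow: the same interface backed by an RPC. Stubbed here; a real
# version would call an HTTP or gRPC endpoint.
class RemoteBilling(BillingService):
    def __init__(self, rpc_client):
        self.rpc = rpc_client

    def invoice_total(self, customer_id: str) -> int:
        return self.rpc.call("billing.invoice_total", customer_id)

def checkout(billing: BillingService, customer_id: str) -> int:
    # Consumer code is identical regardless of deployment shape.
    return billing.invoice_total(customer_id)
```

Swapping `InProcessBilling` for `RemoteBilling` changes nothing in `checkout`, which is exactly what makes splitting the deployable unit later a mechanical change rather than a rewrite.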
Because you have that modular separation, you can maintain a reasonable degree of development velocity because you've divvied up things to teams split along boundaries that make sense given your problem domain. At that point, you can do things like split up the code among separate repositories so that the compartmentalization is a little more tangible, but decisions here mostly rely on the kind of engineering culture you want to foster and how much effort you want to put into release engineering.
I implore you to read Sam Newman's "Building Microservices" and Martin Fowler's "Patterns of Enterprise Application Architecture." Skip Eric Evans' DDD book; even Evans himself says that Newman's book is a better treatise on DDD than his own.
All well and good. Here's something that sounds totally crazy: the future of infrastructure and of software development is in self-programming/self-modifying systems driven by AI to meet a set of requirements. Not buzzword pitchdeck bingo, but systems that can figure out how to optimize, fix, and add features to themselves. It's asking a whole lot to get there, but it's inevitable because the cost savings potential is cavernous. There will only be three jobs left: govt bureaucrat, elder care, and a million applicants vying to be the last remaining engineer who can figure out the latest combinator-based programming language written in BF this machine decided to create on its own.
For non-standardized features, I doubt it will happen. In my experience, it all comes down to the details, and the details need to be expressed in a language without room for interpretation problems. IMO, most programming languages are already at that level, so there won't be any revolution here.
Graphical programming might change how many people work. More standardized modules (blogging, authentication, e-commerce, etc.) might gradually save us more and more time. But whatever differentiates your product will always need to be expressed in some way.
Sure, we could move toward SQL-like ways of expressing the desired result instead of how the program should process things, but honestly I think that in many scenarios plain ifs and similar statements will remain the easier way to express yourself for a long time (forever?).
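For illustration, here are the two styles side by side on a toy dataset (sqlite3 used only because it ships with Python): the declarative query states the result we want; the imperative loop spells out the steps.

```python
import sqlite3

# Toy data: (name, score) pairs. The question: which names score above 4?
rows = [("alice", 3), ("bob", 7), ("carol", 5)]

# Declarative: describe *what* you want.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE scores (name TEXT, score INT)")
db.executemany("INSERT INTO scores VALUES (?, ?)", rows)
declarative = [n for (n,) in db.execute(
    "SELECT name FROM scores WHERE score > 4 ORDER BY name")]

# Imperative: spell out *how* to compute it, with explicit ifs and loops.
imperative = []
for name, score in rows:
    if score > 4:
        imperative.append(name)
imperative.sort()
```

Both produce `["bob", "carol"]`. For a filter this simple the declarative form wins; the parent comment's point is that once the logic gets gnarly and conditional, the explicit steps are often the clearer way to say it.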