top | item 12133670

The End of Microservices

258 points | reimertz | 9 years ago | lightstep.com

145 comments

[+] iamleppert|9 years ago|reply
I'll tell you the real reason behind microservices: developer fiefdoms. "Faux-Specialization". It allows developers to feel like they have control over certain pieces of infrastructure and run the gambit on their strategy for getting ever more increasing pieces of the pie.

It has nothing to do with building reliable software. You could just as easily build and deploy a single networked application (so called "monolith"), that is composed of many different libraries that have well defined interfaces which can be tested in isolation. In fact, that's how most non-web software is still written and done.
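
The in-process alternative described in this comment can be sketched in a few lines. This is a hypothetical illustration, not anyone's real system; all names are invented. Two components with well-defined interfaces are composed by ordinary function calls and are testable in isolation, with no network involved:

```python
# Hypothetical sketch: a "monolith" composed of libraries with well-defined
# interfaces. Each piece can be unit-tested in isolation; no network involved.

class RideEstimator:
    """Estimates pickup time; a plain in-process library, not a service."""

    def estimate_minutes(self, distance_km: float, speed_kmh: float = 30.0) -> float:
        if distance_km < 0:
            raise ValueError("distance must be non-negative")
        return distance_km / speed_kmh * 60


class DispatchApp:
    """The 'monolith': composes libraries via ordinary function calls."""

    def __init__(self, estimator: RideEstimator):
        self.estimator = estimator  # dependency injected, easy to stub in tests

    def pickup_message(self, distance_km: float) -> str:
        minutes = self.estimator.estimate_minutes(distance_km)
        return f"Your car arrives in about {minutes:.0f} minutes"


app = DispatchApp(RideEstimator())
print(app.pickup_message(5.0))  # Your car arrives in about 10 minutes
```

Swapping `RideEstimator` for a stub in tests requires no service mocks or network fixtures; the interface boundary is just a constructor argument.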

The real reason is that microservices let single developers or teams own or control parts of the codebase and enforce that control via separate repos, and, at runtime, via authentication: Sally can't see the code to Joe's service, and Joe can't make requests to Sally's production instance of her service that guesses how long the car will take to arrive to pick up poor end user Bob.

I've seen this same thing play out countless times at large tech companies and startups alike. It has nothing to do with building scalable, or more maintainable, or more cleverly designed applications. If anything, it adds more complexity because now we need to do all kinds of data marshaling, error checking, monitoring, have more infrastructure for something that should have been done in shared memory/in-process to begin with. Not to mention all the issues and headaches caused by fan out of tons of API requests, complicated caching scenarios, etc. I've seen the horror of microservices architecture where no one person is responsible for the actual app, only their "service".

There are a few exceptions where it's useful to scale out parts of a distributed application, but in 99% of my experience the services aren't a real distributed system anyway and are vaguely organized by function, developer interest, and, yes, control.

[+] r2dnb|9 years ago|reply
>In fact, that's how most non-web software is still written and done.

I can even add that we do only web software, and we build it exactly that way.

>If anything, it adds more complexity because now we need to do all kinds of data marshaling, error checking, monitoring, have more infrastructure for something that should have been done in shared memory/in-process to begin with.

I couldn't agree more. Martin Fowler warned us a long time ago: "The first rule of distributed objects: don't distribute them".

>There are a few exceptions where its useful to scale out parts of a distributed application

Yes, and very very very few. As I always say: microservices are not an architecture, they are an optimization.

[+] hibikir|9 years ago|reply
You say it as if it's a bad thing... Compared to the alternative, fiefdoms really are a wonderful thing!

I once worked for a giant Fortune 500 corporation. Our department had a good 300 developers and had been writing code for years. They had built the whole system using the classic box-box-cylinder architecture. There were hundreds of little top-tier 'services', but in practice they all shared the same database, had to be deployed pretty much at the same time, and version upgrades had to go in lockstep. Every so often, database changes were necessary, and the world would grind to a halt for months to make the system still work after the migration. It was awful.

On top of this, having everyone using one stack really meant that a council of elders got to make tech decisions, and everyone else just did what they were told. This built a giant hierarchy. People near the top would never leave, because nowhere else would give them that much power. Talented developers would take years to gain influence, so they often just left. What remained was developers with no ambitions other than getting a paycheck... it was horrible.

The alternative was to let people maintain their own stacks, as long as they provided interfaces that other people could call. By limiting how much code could talk to a database, you didn't need negotiations to change something: Teams made any change they wanted as long as they remained backwards compatible, and then had to lobby users to upgrade to newest API versions if they wanted to remove the backwards compatibility mess. It was awesome in comparison.
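
The backwards-compatibility discipline described above can be sketched roughly like this. The endpoint and field names are invented, and a toy router stands in for real infrastructure; it only illustrates the contract, not a production design:

```python
# Hypothetical sketch: a team changes its response shape (v2) but keeps
# serving the old shape (v1), derived from the same data, until callers migrate.

def get_user_v2(user_id: int) -> dict:
    # current internal representation: split name fields
    return {"id": user_id, "first_name": "Ada", "last_name": "Lovelace"}

def handle(path: str, user_id: int) -> dict:
    user = get_user_v2(user_id)
    if path.startswith("/v1/"):
        # legacy shape, kept alive for old callers; removing it requires
        # lobbying consumers to upgrade first
        return {"id": user["id"],
                "name": f"{user['first_name']} {user['last_name']}"}
    return user

print(handle("/v1/users", 7))  # old callers still see a combined "name"
print(handle("/v2/users", 7))  # new callers get the split fields
```

The team can change `get_user_v2` freely; only the thin v1 adapter has to be negotiated away.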

A gigantic place won't have those problems, because they can invest money in making whatever tech decisions they made tenable: PHP is slow? Let's build a new runtime and compiler, says Facebook! If you are tiny, you don't need any of this BS, because if your team of 8 engineers can't agree, your company will fail regardless. But when you have 200 engineers, it's either give people more control over a piece of the pie, or bleed talent.

The one thing you still need to do is make sure teams are the right size, and that people have enough accountability that the product still works. You also need silly amounts of data marshaling and error checking compared to the monolith. But the terror of a company that can't hire engineers, because the only way to have any control over your daily life is to have been there for 5 years, is just hard to compare. When people say they don't want to work big corporate gigs, what they really mean is that monoliths become soul-sucking.

So yes, I give thumbs up to fiefdoms, just like I'd rather have inefficiency in a republic vs a theoretically efficient dictatorship.

[+] jordwest|9 years ago|reply
Any organization that designs a system ... will inevitably produce a design whose structure is a copy of the organization's communication structure

— M. Conway

[+] DanielBMarkham|9 years ago|reply
I think you've missed the point in several different ways here.

First, there's a difference between "I've seen it done like X" and "When done well, it's done like Y"

Too often we play this game where we talk, teach, and apply Y, but then in the real world it gets done like X. Turns out that X sucks a lot, so then we throw away Y.

Microservices may be done poorly in most real world applications. In fact it would surprise me if they weren't.

This article doesn't help much. Microservices are not just another version of SOA. It doesn't work that way. In SOA, you start with a general category of service, say "logging". You write it simply, yet broadly, and it's supposed to handle all folks that need logging. In microservices, you're doing one tiny little thing, like "make a record of the errors of this process available on the web". The vast majority of times, if you define what you're doing narrowly enough? The OS already does it. Whereas if you start broad? You're writing code.

Then you slowly and methodically expand on that mission a little bit at a time, refactoring your microservices as you go. It's both a different way of structuring apps from monolithic days and a different way of looking at constructing and maintaining apps. If you think of it as the same blob of binary bits broken into smaller pieces, you've missed it. Likewise if you think of it in terms of "services". The "micro" is the key word here, not the "services" part.
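
A sketch of what such a narrowly-scoped service might look like, assuming Python and WSGI purely for illustration: one endpoint that publishes this process's recorded errors, and deliberately nothing else.

```python
# Hedged sketch of a "micro" service with one tiny job: expose this process's
# recorded errors over the web. Everything else is out of scope by design.
from wsgiref.simple_server import make_server
import json

errors: list[str] = []  # the process appends its errors here

def record_error(message: str) -> None:
    errors.append(message)

def app(environ, start_response):
    # single endpoint; anything beyond GET /errors is a 404
    if environ["PATH_INFO"] == "/errors" and environ["REQUEST_METHOD"] == "GET":
        body = json.dumps(errors).encode()
        start_response("200 OK", [("Content-Type", "application/json")])
        return [body]
    start_response("404 Not Found", [("Content-Type", "text/plain")])
    return [b"not found"]

if __name__ == "__main__":
    record_error("disk full")
    make_server("", 8000, app).serve_forever()  # serve until killed
```

Because the scope is so narrow, "expanding the mission a little at a time" means adding one route or one field, never redesigning a broad service contract.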

This actually requires a much heavier interaction between developers, not setting up fiefdoms. If done correctly, it pushes larger groups of developers across teams to work more tightly together. If it's doing something else? You're doing it wrong.

[+] dasil003|9 years ago|reply
If you read my top-level comment you'll see that I agree with you 90%, but be careful not to throw the baby out with the bathwater.

There are legitimate reasons why having those fiefdoms is beneficial despite the overhead, especially in a SaaS world where you are not distributing binaries. Being able to use different languages for different purposes, and to deploy them to different hardware, can be quite a bit more efficient in terms of hardware costs at scale. There is also something to be said for "you write it, you run it", as it makes developers more careful and decreases finger-pointing. Again, don't take this as a rebuttal; the overhead is potentially a lot, but if clean interfaces can be determined and make sense from a high-level business perspective, then there can be a net gain.

[+] partycoder|9 years ago|reply
When you have a large service, you can end up with a lot of coupling. That makes it hard to have teams that specialize in specific parts of the application; you need a huge team that knows everything about everything, and that doesn't scale as your code and company grow. You can see the effects of that in reduced productivity and technical debt beyond belief.

Even if you hire high-end talent, they will be forced to worship the people who know how the whole bloated Jenga-tower mess works.

And that's only the people problem, let's not even discuss how you build, deploy, test things in a reliable way without taking days to produce a working build.

That, and the fact that your quality of life will suck, since you won't be in control of such a mess.

With microservices you have one service that does a limited set of things, that is easy to monitor, maintain, test, deploy and even entirely replace if necessary.

[+] dmreedy|9 years ago|reply
I agree about fiefdoms, and want to address a particular aspect of that pattern, somewhat complementary to yours, that I've seen at least in the (very large) organization that I'm a member of.

Specifically, just as a microservice allows encapsulation of functionality, it also allows encapsulation of blame[1]. In a monolith (from whence the org I'm in came), a build failure or test regression could be caused by any number of failures across any number of horizontal bands in the organization. Oh, the build automation crashed because the build team updated to the latest version of Java but the build didn't. Oh, the UI Filters stopped working because the API team changed something without deprecating. It meant that development, in spite of agile efforts, still had a tick-tock cadence, where breaks halted work and tracking down the responsible parties and getting things fixed might take time (a lot of areas with deep specialties required to understand why something might be wrong). This also meant, because of the way the organization was structured and the way the build was structured, that "pressure" was directed along very hierarchical routes. Managers saw bugs from customers and pressured testers and dev-ops people who maintained automation to investigate causes and transfer responsibility to developers who might be able to actually fix the problems.

As we've been decomposing into microservices, and likewise aligning along feature teams, the blame gets allocated at API/service interfaces[2] instead of top-down. Since the build, deployment, uptime, and algorithmic functionality of each service is theoretically the domain of a single team, the blame-flow is more distributed and simple. An algorithmic bug, a build bug, and an availability issue are all addressed the same way: report the issue to the team responsible for that service, and let them work it out.

I'm not advocating that either way is better. There were nice aspects about a single-location debug tree in the monolith. I've seen teams that have become experts at deflecting blame and thus slow down the entire broader effort. And I know I'm possibly conflating two paradigms inappropriately (Feature Teams and Microservices). Just a notable pattern, to my eyes.

---

[1] I don't necessarily mean 'blame' here in a pejorative sense. Perhaps 'responsibility' would be a more neutral term.

[2] Steve Yegge describes this as being a top priority during Amazon's service/platform decomposition.

[+] pjlegato|9 years ago|reply
It happens for another reason too: everyone likes to flatter themselves that they're top-tier big shots, that they're "web scale," that they have the same scaling problems and uptime problems that Google and Facebook have, and thus they think they need to adopt the same architecture patterns that Google and Facebook use.

Someone builds a monolith, it gets some traction, and suddenly the server bill gets into five or six figures a month. Management have never been management before, and have never paid more than a few hundred dollars a month for servers, so their new cloud bill is mind-blowing to them. They're spending as much money as a house costs every month on renting servers. They imagine they're hemorrhaging cash left and right, they freak out, and they order engineering to rearchitect everything around a service-oriented architecture, because clearly it's time for the big-boy toys. Google does it that way, so clearly that's what one does when one is at scale, when one is successful and has lots of paying customers, right?

99.9% would be better off financially and technically just rewriting a V2 of the monolith using lessons learned from V1, but that's perceived as outmoded, old-world thinking, because it's not what Google does.

[+] lobster_johnson|9 years ago|reply
I see several people criticize microservices here. We've been doing it for about 6 years and are extremely happy with it.

A core principle which a lot of people and articles ignore, though, is reusability. I bring this up on HN every time there's a discussion about microservices, yet I've never seen any discussion about it.

Essentially, you build out the backend to act as a library for your front end. So we have login, storage, analytics, reporting, logging, data integrations, various forms of messaging, business-structural stuff, etc. etc. all bundled up as separate services. The front ends just use these services to build a coherent product. The front end is the monolith: The microservices are the cloud.

For example, let's say I wanted to create a new product called "Hacker News". I'd use our storage service to store links and comments. I'd use the login service to let users log in. I'd use our messaging service to send notifications about things like verifying your email or send password resets. I'd use our analytics backend to emit events for reporting. And so on. I could easily build the whole thing without writing a single line of backend code, and without setting up a new cluster, because every backend service has been designed with multitenancy from the start.
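
A hedged sketch of that composition style, with invented class names standing in for real services: each backend service is multitenant and generic, so a new product is just a new tenant wiring them together from the front end.

```python
# Hypothetical sketch; every name here is invented for illustration.
# Each backend service is multitenant, so a new product needs no new backend.

class StorageService:
    """Stand-in for a generic multitenant storage backend."""
    def __init__(self):
        self._items = {}
    def put(self, tenant: str, key: str, value: dict) -> None:
        self._items[(tenant, key)] = value
    def get(self, tenant: str, key: str) -> dict:
        return self._items[(tenant, key)]

class LoginService:
    """Stand-in for a shared login backend."""
    def login(self, tenant: str, user: str) -> str:
        return f"token-{tenant}-{user}"  # a real service would mint a session

class HackerNewsApp:
    """The front-end 'monolith': composes existing services, no backend code."""
    TENANT = "hackernews"
    def __init__(self, storage: StorageService, login: LoginService):
        self.storage = storage
        self.login = login
    def submit(self, user: str, item_id: str, url: str) -> str:
        token = self.login.login(self.TENANT, user)
        self.storage.put(self.TENANT, item_id, {"url": url, "by": user})
        return token

storage = StorageService()
hn = HackerNewsApp(storage, LoginService())
print(hn.submit("bob", "item1", "https://example.com"))  # token-hackernews-bob
```

The front end owns the product logic; the services stay reusable precisely because they know nothing about "Hacker News", only about tenants, keys, and users.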

This ability to piggyback on a platform of services is where I think the real utility of microservices lies. Everything else — fine-grained scalability, smaller surface for tests, language independence, swappable implementations, etc. — is secondary to that.

[+] lmm|9 years ago|reply
Separate pieces of functionality into small reusable libraries? Great. Enforce separation of each library's internals from each other one? Great. Ensure each database is owned by one and only one service? Great.

Invoke those services via RPC-over-HTTP? Why???

[+] pjmlp|9 years ago|reply
I have been doing it since the early 90's.

It is called unit, module, package, library.

No need to put some network layer, with its own set of problems, between method/function calls.

[+] carterehsmith|9 years ago|reply
You don't get more reusability from a bunch of microservices than from one big service. I don't see where you got that.
[+] collyw|9 years ago|reply
Sounds kind of like what I would do, except I would use a database table / controller for each of the things you consider a microservice.
[+] dasil003|9 years ago|reply
Equating "Microservices" with "Information Superhighway" really shows the tech bubble that this article is written in. "Information Superhighway" was a vacuous but mainstream term used by politicians and public figures. "Microservices" is a tech hype train led by expensive consultants and pickaxe companies thriving off the current tech boom.

Don't get me wrong, a service-oriented architecture is the only thing that scales to large companies. Once you get to dozens of engineers and millions of lines of code you will inevitably need an SOA, because of Conway's law. Also, there is a learning curve to building microservices which improved tooling really helps with.

However the thing that really grates at me is how these articles say things like:

> Services are now an everyday, every-developer way of thinking

With nary a mention of the overhead. There is no way around it, distributed systems have an irreducible complexity no matter how good your tooling and boilerplate are. You have to put in extra work to decouple everything and handle failure in a way that actually reaps the benefits of the distributed system. And in the end, what these articles always gloss over is the interface between these systems. If you can easily define an interface between systems that stays relatively stable as the service evolves, then congratulations, you have a good candidate for a service with minimal overhead. But for most applications, those interfaces are shifting all the time, and there is no better tooling than multiple logical services running within one binary and build system where integration testing and validation is cheap. This is a real fucking problem people, it's not going to go away because there's a couple billion dollars worth of venture-backed startups ready to blow their cash on you in the vain and most likely misplaced hope that they are actually going to have to scale to dozens of engineers. Premature scalability is one of the worst siren songs for young engineers and we're seeing it in spades right now.

[+] dexwiz|9 years ago|reply
It's all a balancing act. The two main contenders are developers and scalability.

It makes no sense for a single team to run 2000 microservices that come together into a single app. The amount of overhead for managing so many interfaces is insane.

At the same time, it's hard to justify 2000 developers working on a single binary. You end up with entire teams dedicated to managing and deploying. Companies do it (Google), but it's not without costs.

If every microservice runs with the same specs (container size/# of containers), then there is nothing gained from scalability. If anything, you're probably wasting a large amount of resources if your containers cannot shrink any more.

At the same time, if you are deploying thousands of copies of a single binary when most of the resources go to 1% of the code, then you're wasting resources with needless copies.

The (micro)services fad is definitely brought on by the recent rise of virtualization. It's probably a bit overboard.

[+] sp527|9 years ago|reply
Speaking as a young engineer, I can tell you many are already looking past microservices towards things like AWS Lambda, AMQs, and BaaS, which make a great deal more sense. Why? Because it helps reduce dev effort down to purely the logic you'd have to write no matter what, with better guarantees about reliability and scalability, and less maintenance. I hesitate to say 'serverless' because that still feels somewhat out of reach, but that's the direction things are trending towards.

I also get the sense that a lot of purists moulded in the ways of yore are alarmed at the waning relevance of their skillset. This to me seems like a bigger problem than the evolution of software paradigms.

[+] spriggan3|9 years ago|reply
> Premature scalability is one of the worst siren songs for young engineers and we're seeing it in spades right now.

This. What happened to "make it work then make it scale" ?

[+] pm90|9 years ago|reply
This is a great point. I don't think the hype is due to a fad alone. There are a lot of good reasons why microservices are much easier to create and deploy today: containerization and the associated technology around it. This makes microservices a very natural paradigm for app deployment.

Of course, it's not a holy grail. But we see that kind of thing over and over again. I think it's OK; it gets developers genuinely excited to try new technology. It's only when this hype influences dangerous decisions that I worry.

[+] djspoons|9 years ago|reply
I wrote the post with pretty rose-colored glasses on. :)

I totally agree that microservices can be a form of premature optimization, in particular because of the cost with today's tooling. But I think there's hope that a lot of those costs will go down (both in terms of dev time and infrastructure) with things like AWS Lambda, etc.

Anyway, if devs think a little more about the interfaces, I think that will be a good thing.

[+] msoad|9 years ago|reply
One thing I don't like about SOA is that an error does not have a full stack trace. I know Zipkin exists but it's nowhere close to what we had in a monolithic app where you could just put a breakpoint and trace back to where exactly an error is thrown.

If we can find a way of running a giant monolithic app in development and production environment without vertically scaling our machines, I would rather have that.

Every bug I'm working on is like a mystery where I have to hop through many services to find out what's going on.

I also think HTTP is the worst protocol for apps to talk to each other.

[+] jeremiep|9 years ago|reply
How is that different from debugging a multi-threaded or networked application? In both cases barely entering the debugger changes the behavior of the system. Heck even an asynchronous program running on a single thread doesn't have full stack traces.

Having worked with both monolithic and SOA apps, the latter yields radically simpler architectures, and from there you spend a lot less time debugging.

The older I grow as a programmer, the more I dislike monoliths. I'd rather have simpler programs where entire classes of bugs are guaranteed never to happen. I have yet to see a single monolithic app without serious technical and conceptual debt. The worst thing is that back when I thought monoliths were great I had absolutely no idea things could be so much simpler.

Also, HTTP/1.1 is a fantastic protocol. It's dead simple to implement, debug, cache, and send through proxies that won't understand your custom headers or body format. It even gives you an extra layer of routing on top of TCP/IP! This is exactly what you want to build systems with.

[+] uluyol|9 years ago|reply
What's wrong with HTTP? If you're going to build an RPC protocol, you are going to need encryption, stream multiplexing, a way to distinguish RPC methods, support for extensions (e.g. auth), and compression.

Add the fact that high performance HTTP servers and clients exist in most languages, and building an RPC protocol on top of HTTP sounds pretty attractive. No wonder gRPC did exactly that.
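
A toy illustration of that mapping (not gRPC itself; the wire shape here is invented): the URL path distinguishes the RPC method, headers carry extensions such as auth, and the body carries the payload.

```python
# Minimal sketch of RPC-over-HTTP conventions. Requests are plain dicts
# standing in for real HTTP messages; no actual network is involved.
import json

def encode_rpc(service: str, method: str, params: dict, token: str) -> dict:
    """Client side: build an HTTP-shaped request for an RPC call."""
    return {
        "method": "POST",
        "path": f"/{service}/{method}",  # the path names the RPC method
        "headers": {
            "Authorization": f"Bearer {token}",  # extension: auth
            "Content-Type": "application/json",
        },
        "body": json.dumps(params),
    }

def dispatch(request: dict, handlers: dict) -> dict:
    """Server side: route the path back to a handler, decode the body."""
    _, service, method = request["path"].split("/")
    return handlers[(service, method)](json.loads(request["body"]))

handlers = {("Greeter", "SayHello"): lambda p: {"message": f"Hello, {p['name']}"}}
req = encode_rpc("Greeter", "SayHello", {"name": "HN"}, token="secret")
print(dispatch(req, handlers))  # {'message': 'Hello, HN'}
```

Everything HTTP infrastructure already understands (paths, headers, bodies) carries the RPC semantics, which is roughly why layering RPC on HTTP is attractive.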

[+] RhodesianHunter|9 years ago|reply
Any service taking requests from a client generates a UUID and passes it through to any additional services it calls. Most microservice frameworks have this functionality built in.

You log to a central store such as an ELK stack or any of the great third party offerings. When you need to see the entire stack trace you search by the id.
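
The pattern can be sketched as follows; the service names are invented, and a plain list stands in for a central log store such as an ELK stack.

```python
# Sketch of the request-ID pattern: generate a UUID once at the edge,
# pass it to every downstream call, and log it with every line.
import uuid

log_store: list[dict] = []  # stand-in for a central store like ELK

def log(request_id: str, service: str, message: str) -> None:
    log_store.append({"request_id": request_id, "service": service, "msg": message})

def billing_service(request_id: str) -> None:
    log(request_id, "billing", "charged card")

def checkout_service() -> str:
    request_id = str(uuid.uuid4())  # generated once at the entry point
    log(request_id, "checkout", "order received")
    billing_service(request_id)     # propagated downstream, never regenerated
    return request_id

rid = checkout_service()
trace = [e for e in log_store if e["request_id"] == rid]  # the 'stack trace'
print([e["service"] for e in trace])  # ['checkout', 'billing']
```

Searching the central store by the ID reconstructs the cross-service path of one request, which is the distributed analogue of the monolith's stack trace.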

[+] themihai|9 years ago|reply
Depending on your requirements micro-services may be a good or a bad solution. If the communication/protocol becomes too chatty that may be a sign you are doing it wrong.
[+] djspoons|9 years ago|reply
Tracing and debugging in development are solving different problems, though: tracing is there to help you understand issues that you can't reproduce in development. And for the record, Zipkin is only one tracing tool, and it's really aimed at helping you address latency issues, not errors.
[+] sinzone|9 years ago|reply
When I came back from DockerCon this year I immediately wanted to write something very similar to what this article describes. I wanted to imagine a world where containers and microservices were already part of the past, so I wrote "DockerCon 2020" [1] about what it will look like.

[1] https://medium.com/@sinzone/dockercon-2020-a513ed04eefb#.rbz...

[+] danblick|9 years ago|reply
I'm sorry -- I think the author completely misses the point about why microservices were controversial at all.

Distributed systems are not the same as centralized ones, and you cannot paper over the differences between the two. It is wrong to think that distributed microservices will completely replace centralized services in some future paradise. The difference is not a tech fad; it's more like a law of nature. Distributed systems should plan for network failures, yet nobody wants to get a "503" from their CPU.

[+] AstralStorm|9 years ago|reply
You will get a 503 from your kernel if you try hard enough. Handling this is technically required in any application, but often ignored, especially when allocating memory (because there, it is a deferred failure).
[+] ianamartin|9 years ago|reply
I'm not totally convinced that allowing developers to build faster is really all that great of an idea. At least not the sacred ideal that seems to be accepted without any question.

Most of what I see when people are moving fast is building things as fast as they can think of them based on the first idea that comes to mind that sounds like it might get things done.

But the reality is that the first way that you think of implementing something isn't always the best. It's often just about the worst. Giving people the ability to take any whim of a design and run with it all the way to production isn't the best thing overall for software quality.

Perhaps I'm alone here, but I'd like for developers to slow down and put some thought into what they are building, and how it's supposed to work, and if it's going to be able to do what it needs to do. I see a lot of "close enough" in my line of work.

I know it's different in a startup, where testing the idea now is important, and I'm not slamming that. But the vast majority of developers don't work in startups where getting a product to market before a competitor is the difference between making billions and going home broke.

We temper our desire for perfection by reminding ourselves that good enough is okay for now. I'd like us to temper our desire for speed by remembering that there is such a thing as soon enough.

[+] nkassis|9 years ago|reply
What you are describing is definitely an issue I face every day, but I don't think speed of development is the problem. Higher productivity is good, but often what is called higher productivity is just carelessly ignoring design and planning in favor of a get-things-done-now mentality.

Often the worst offenders are rewarded for being highly productive, and the people who end up having to clean up, refactor, and get things actually working are not acknowledged for the vital effort they put in. In my view this is the result of bad management incentives and a failure to properly assess results and contributions.

[+] TickleSteve|9 years ago|reply
"Microservices" are a new name for a very old concept.

This is just low-coupling, high-cohesion by another name.

Small, composable, decoupled, highly cohesive components are what "good" software has been about for decades, but it now has a new name in the server s/w world; "Microservices".

Only the name is new & hyped. The concepts have been true forever.

[+] alayne|9 years ago|reply
There are software and concepts around managing microservices that didn't exist before, and so on. Nothing can be new when you use such a vague way of comparing things. There were electric cars in the 1800s, so why talk about Tesla?
[+] ris|9 years ago|reply
Another good article on this subject: https://m.signalvnoise.com/the-majestic-monolith-29166d02222...

My experience with microservices has been pretty painful. My analogy of microservices is it's a bit like building a car factory on two sides of the Danube. And there's no phone line in between. You've got a factory building cars up to a certain point, but then they have to stop work and pack it all up onto a barge, figure out how to fit everything on the barge and send it away across the river for the other side to spend time unpacking & figure out how it all fits together...

As a django guy, I've tended to follow the pattern of spending time making my models nice and rich, with useful traits which will be helpful at all levels of the app down to the views. To then have to pack this all up and deliver some "dumb" json to the other side feels like a massive waste of time. With microservices I spend my life marshalling data around.

And the number of times I've realized I've just spent an hour discussing the philosophical implications of how a particular bit of the REST interface should be designed, all for an interface that we're the only consumers of and that doesn't need to exist in the first place... I've found depressing.

The ramifications for testing are a further story. Do you have to test all the ways your REST requests can arrive malformed if you're the only consumer and know exactly how you're going to use them? Is that a good use of developer time?

[+] rbosinger|9 years ago|reply
Ok. I'll be the guy to bring up the Elixir/Erlang ideology that has gained such popularity here. Although I don't have a ton of experience with it yet, it seems the possibility of having the idea of "services" built into the language/framework design is very realistic. That's exciting for me. Although true SOA can be a mix of many technologies, I personally find that scary. What happens when your whole platform ends up as a web of services on different technologies and you lose various talent? Now you have to recruit all kinds of different expertise, or hope that certain services keep ticking without that knowledge in house.
[+] drdaeman|9 years ago|reply
I don't know about the microservices and stuff, but I've got one cumbersome monolith to deal with, and it had started to rot (you know, rely on outdated dependencies that one can't upgrade without significant effort etc etc). Splitting it to a few isolated different systems looked like the only sane choice.

Luckily, I've had to redo one logical part of the monolith anyway, because of some changing business requirements. So I made it a separate independent project, that had used all the modern currently-stable tech (rather than few-years-old one + accumulated baggage of the past architectural mistakes) and it all went quite nicely.

It took me 1.5 weeks (quite busy ones, but meh) to extract all the old code pieces that I needed, clean them up, update them with the new logic, and get the freshly-minted project ready, tested, packaged up, and running in production. The only thing I've lost is the ability to run cross-db queries (we just have a folder of read-only SQL query snippets to fetch some useful statistics once a week or so), because I put the data in a separate database. I hope postgres_fdw will work when I need it.

Had I tried to update the whole monolith, it would've taken me months.

So, the next time I work on some large-enough part, I'll probably extract it into a fresh project as well. As I see it, I'll end up with the remains of a legacy project surrounded by a few small(er) monoliths. And then the legacy piece will be small enough to get cleaned up.

(I don't know about the micro- scale and putting every tiny thing into a different microservice, though. I have the impression it requires a lot of extra unwanted cognitive load to manage, so it seems like overkill to me.)

So, my point is: software (code) rots over time. Multiple projects (services) allow you to update pieces at different paces, which is less stressful for developers.

[+] tomc1985|9 years ago|reply
It is weird to read people write about microservices (or some other tech fad) as if it is this otherworldly thing that requires instruction and training. So many words dedicated to describing the supposedly bad old days!

All this stuff is just another aspect in the life of a practitioner of computing. A proper expert should see these things not as a fad, but as a collection of techniques that can be added to or subtracted from at will depending on the prevailing need. It's silly to declare any of these fads dead or alive; they're simply techniques that ...people... have bundled together under a common label.

[+] collyw|9 years ago|reply
I think after a certain amount of years you start realizing every new fad is just a rehash of some old ideas. (You ought to notice this as you get more experienced).
[+] xxs|9 years ago|reply
...but you can write (and sell) books about it, and talk at conferences too
[+] mdgrech23|9 years ago|reply
The title is link bait and does not reflect the arguments put forth by the author.
[+] romanovcode|9 years ago|reply
Not to mention it's a plug to the product he's selling.
[+] stevehiehn|9 years ago|reply
Maybe a more appropriate title could be 'Microservices are the Norm'
[+] justinhj|9 years ago|reply
I joined a company where the proof of concept had, inevitably, become the monolithic application we would work on for the next two years. Everyone on the team agreed that the monolith would be a liability, so we started to share knowledge on microservices and plan for that in the future.

To do this we stuck to a handful of rules. Systems should do one thing and do it well, with a well-defined API and protocol. While all the data might be in the same Redis and MySQL instance, we made the data store and its location configurable, and made sure systems did not read or write each other's data. We wrote generic systems as libraries with no dependencies on the rest of the monolith.

The result of this work, which was a lot of refactoring, is that when we decided to farm some work out to a contractor, we could do so as a microservice. They worked in their favourite language with their own tools, implementing the API we wanted over a specified protocol. At any point it would be possible to split out services to scale them horizontally, but we didn't have to until we needed to, because every split increases the operational costs and complexities a little.
[+] kevinr|9 years ago|reply
I feel like maybe the Big Idea of microservices is that web APIs provide better isolation guarantees than library calls, and now with the move to SaaS either the scale of our applications is large enough or (more likely) with virtualization the intra-server network latency is small enough that we can afford the extra overhead of web APIs relative to library calls in exchange for that isolation.
[+] gedrap|9 years ago|reply
When it comes to services, I think it's worth talking about one common use case that comes with different motivations and problems: adding new features to an old, probably poorly engineered, monolithic application. Features that are not a tiny yet-another-CRUD-on-a-new-table, but completely different from most of the existing functionality.

In this case, services really pay off if they are separated well, which is sometimes hard. But executed well, this allows you to keep moving quickly as the requirements grow. Of course, it is not an excuse to avoid refactoring the monolithic application, improving testing, etc.

I've worked in such a setting at two companies, and both times it was a win that helped build features important to the business really quickly and reliably.

But is it worth writing an application from scratch in a service-oriented architecture? Probably not, most of the time. Especially if 'time to market', 'MVP', and similar concepts are very important to you.

[+] k__|9 years ago|reply
I had the feeling that microservices would add too much complexity, but with FaaS this is canceled out by the fact that almost all server-management complexity is handled by a different company.
[+] tuananh|9 years ago|reply
They have the word "microservice" on their homepage :D
[+] EGreg|9 years ago|reply
John Titor is back!!