top | item 46258499

wowohwow | 2 months ago

I think your point, while valid, is probably a lot more nuanced. From the post it sounds more akin to an Amazon-style shared build and deployment system than an "every library update forces a redeploy of everything" scenario.

It's likely there's a single source of truth where you pull libraries or shared resources from. When team A wants to update the pointer for library-latest to 2.0 but the current reference of library-latest is still 1.0, everyone needs to migrate off of it, otherwise things will break due to backwards incompatibility or whatever.

Likewise, if there's a -need- to remove a version for a vulnerability or what have you, then everyone needs to redeploy, sure, but the benefit of centralization likely outweighs the cost and complexity of tracking the patching and deployment process for each and every service.

I would say those systems -are- and likely would be classified as microservices, but from a cost and ease perspective they operate within a shared-services environment. I don't think it's fair to consider this style of design decision a distributed monolith.

By that level of logic, having a singular business entity vs 140 individual business entities for each service would mean it's a distributed monolith.


mjr00 | 2 months ago

> It's likely there's a single source of truth where you pull libraries or shared resources from. When team A wants to update the pointer for library-latest to 2.0 but the current reference of library-latest is still 1.0, everyone needs to migrate off of it, otherwise things will break due to backwards incompatibility or whatever.

No, this misses one of the biggest benefits of services: you explicitly don't need everyone to upgrade library-latest to 2.0 at the same time. If you do find yourself in a situation where you can't upgrade a core library (e.g. SQLAlchemy or Spring), or the underlying Python/Java/Go/etc. runtime, without requiring updates to every service, you are back in the realm of a distributed monolith.

rbranson | 2 months ago

This is explicitly called out in the blog post in the trade-offs section.

I was one of the engineers who helped make the decisions around this migration. There is no one size fits all. We believed in that thinking originally, but after observing how things played out, decided to make different trade-offs.

wowohwow | 2 months ago

I disagree. Both can be true at the same time. A good design should not point to library-latest in a production setting; it should point to a known-good stable version via direct reference, e.g. library-1.0.0-stable.

However, in the world we live in, people choose to point to latest, to avoid manual work, trusting that other teams did the right diligence when updating to the latest version.

You can point to a stable version in the model I described and still be distributed and a microservice, while depending on a shared service or repository.
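
The difference between pinning and floating can be sketched in a few lines. This is a hypothetical model (registry layout and names invented for illustration): "latest" is a mutable alias maintained centrally, while a pinned reference resolves to an immutable artifact.

```python
# Hypothetical sketch: a registry where "library-latest" is a mutable alias,
# while version-pinned references resolve to immutable artifacts.

REGISTRY = {
    "library-1.0.0": "sha256:aaa...",   # immutable artifact (digest invented)
    "library-2.0.0": "sha256:bbb...",   # immutable artifact
    "library-latest": "library-1.0.0",  # mutable pointer, centrally maintained
}

def resolve(ref: str) -> str:
    """Follow alias indirection until we reach a concrete artifact digest."""
    target = REGISTRY[ref]
    return resolve(target) if target in REGISTRY else target

# Service A pins; service B floats on the alias. Today they agree.
assert resolve("library-1.0.0") == resolve("library-latest") == "sha256:aaa..."

# Team A moves the alias to 2.0: only floating consumers are affected.
REGISTRY["library-latest"] = "library-2.0.0"
assert resolve("library-1.0.0") == "sha256:aaa..."   # pinned: unchanged
assert resolve("library-latest") == "sha256:bbb..."  # floating: silently moved
```

The pinned service keeps working until its team chooses to migrate, which is exactly the independence being argued about upthread.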

mekoka | 2 months ago

You're both right, but talking past each other. Shared dependencies do create a problem, but they can be the problem without semantically redefining the services themselves as a distributed monolith. Imagine someone came to you with a similar problem and you concluded "distributed monolith", which may lead them to believe their services should be merged into a single monolith. What if they then told you that would be tough because these were truly separate apps, one on Django/Postgres, another on Flask/SQLite, and another on FastAPI/Mongo, but they all used the same OS-wide Python install and relied on some of the same underlying libs that are frequently updated? The more accurate finger points at bad dependency management, and you'd tell them about virtualenv or Docker.
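
A toy model of that diagnosis (service names and versions invented): one OS-wide package set forces every app to agree on every version, while per-service environments, which is what virtualenv or Docker give you, let each app pin independently.

```python
# Hypothetical sketch: three separate apps, one shared OS-wide package set
# versus a per-service environment for each.

services = {
    "django_app":  {"common-lib": "1.4", "orm": "2.0"},
    "flask_app":   {"common-lib": "1.4"},
    "fastapi_app": {"common-lib": "2.1"},  # needs the newer common-lib
}

def shared_install(svcs: dict) -> dict:
    """One global site-packages: every requirement must agree on a version."""
    merged = {}
    for deps in svcs.values():
        for pkg, ver in deps.items():
            if merged.setdefault(pkg, ver) != ver:
                raise RuntimeError(f"conflict on {pkg}: {merged[pkg]} vs {ver}")
    return merged

def isolated_install(svcs: dict) -> dict:
    """One environment per service: each keeps its own pinned set."""
    return {name: dict(deps) for name, deps in svcs.items()}

try:
    shared_install(services)
except RuntimeError as err:
    print("shared model fails:", err)  # conflict on common-lib: 1.4 vs 2.1

envs = isolated_install(services)
print("isolated model:", envs["fastapi_app"])  # fastapi_app gets its own 2.1
```

The apps were never a monolith; the shared install just made them fail like one.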

deaddodo | 2 months ago

The dependencies they're likely referring to aren't core libraries, they're shared interfaces. If you're using protobufs, for instance, and you share the interface definitions in a repo, updating Service A's interface(s) necessitates updating every service that communicates with it as well (whether you utilize those changes or not). Larger systems generally maintain a true dependency-management tree for this, but for smaller/scrappier teams that's out of scope, so they just redeploy everything in a domain.
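
Whether an interface change actually forces that redeploy depends on the kind of change. A rough sketch of the distinction (message shapes invented; plain dicts standing in for decoded protobuf messages, which similarly let consumers ignore fields they don't know):

```python
# Hypothetical sketch: a consumer built against v1 of a message schema,
# reading only the fields it knows (the "tolerant reader" pattern).

def consume_order(msg: dict) -> str:
    # Consumer was compiled against v1 of the interface: id + amount only.
    return f"order {msg['id']}: {msg['amount']} cents"

v1_msg = {"id": 7, "amount": 1250}
v2_msg = {"id": 7, "amount": 1250, "currency": "USD"}  # producer added a field

# An additive change is backwards compatible: the old consumer keeps working
# without a redeploy, it simply ignores the new field.
assert consume_order(v1_msg) == consume_order(v2_msg)

# A breaking change (renaming or removing a field) is what actually forces
# every dependent service to be updated and redeployed.
v3_msg = {"id": 7, "amount_cents": 1250}
try:
    consume_order(v3_msg)
except KeyError as err:
    print("breaking change detected:", err)
```

Teams that redeploy the whole domain on every interface bump are paying the breaking-change cost even for additive changes.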

aezart | 2 months ago

We had this problem: 119 services that all got their dependencies from a shared domain. Individual services had to depend on the exact versions of libraries provided by the domain. It made updates essentially impossible.

Eventually we were forced by a licensing change to move to containers, which fixed that issue but substantially increased our resource usage: we went from 16 GB of RAM for the entire domain to 1.5 GB per service, with similar increases in CPU.

philwelch | 2 months ago

> If you do find yourself in a situation where you can't upgrade a core library like e.g. SQLAlchemy or Spring, or the underlying Python/Java/Go/etc runtime, without requiring updates to every service, you are back in the realm of a distributed monolith.

Show me a language runtime or core library that will never have a CVE. Otherwise, by your definition, microservices don’t exist and all service oriented architectures are distributed monoliths.

lelanthran | 2 months ago

Right, but there's a cost to having to support 12 different versions of a library in your system.

It's a tradeoff.

3rodents | 2 months ago

Yes, you’re describing a distributed monolith. Microservices are independent, with nothing shared. They define a public interface and that’s it, that’s the entire exposed surface area. You will need to do major version bumps sometimes, when there are backwards incompatible changes to make, but these are rare.

The logical problem you’re running into is exactly why microservices are such a bad idea for most businesses. How many businesses can have entirely independent system components?

Almost all “microservice” systems in production are distributed monoliths. Real microservices are incredibly rare.

A mental model for true microservices is something akin to depending on the APIs of Netflix, Hulu, HBO Max and YouTube. They’ll have their own data models, their own versioning cycles and all that you consume is the public interface.

makeitdouble | 2 months ago

I'm trying to understand what you see as a really independent service with nothing shared.

For instance, suppose company A uses part of the GCP logging stack and company B does the same. GCP updates its product in a way that strongly encourages upgrading within a specific time frame (e.g. the price will drastically increase otherwise), so A and B do it at mostly the same time for the same reason.

Are A and B truly independent under your vision? Or are they a company-spanning monolith?

wowohwow | 2 months ago

This type of elitist mentality is such a problem and such a drain on software development. "Real microservices are incredibly rare." I'll repeat myself from my other post: by this level of logic, nothing is a microservice.

Do you depend on a cloud provider? Not a microservice. Do you depend on an ISP for Internet? Not a microservice. Depend on humans to do something? Not a microservice.

Textbook definitions and reality rarely coincide. Rather than taking a fundamentalist approach that leads nowhere, recognize that, for all intents and purposes, what I described is a microservice, not a distributed monolith.