Ask HN: How do you version control your microservices?
I currently use git submodules to track the application as a whole, with commit refs for each "green" version of microservice. This "master" repository is then tested with consumer driven contracts for each of the referenced submodules, with subsequent "green" masters for deployment to staging.
This submodule approach requires a lot of discipline for small teams, and on more than one occasion we have encountered the usual submodule concerns. I'm concerned that this will only become more problematic as the team grows.
What are your thoughts for a replacement process?
[+] [-] sagichmal|10 years ago|reply
You should have a build, test, and deploy pipeline (i.e. continuous deployment) which is triggered on any commit to master of any service. The "test" part should include system/integration tests for all services, deployed into a staging environment. If all tests pass, the service that triggered the commit can be rolled out to production. Ideally that rollout should happen automatically, should be phased-in, and should be aborted and rolled back if production monitoring detects any problems.
[+] [-] supermatt|10 years ago|reply
I'm using Martin Fowler's description (and I admire him greatly): "In short, the microservice architectural style is an approach to developing a single application as a suite of small services" http://martinfowler.com/articles/microservices.html
My problem is knowing which microservice dependencies the application has.
> You should have a build, test, and deploy pipeline (i.e. continuous deployment) which is triggered on any commit to master of any service. The "test" part should include system/integration tests for all services, deployed into a staging environment. If all tests pass, the service that triggered the commit can be rolled out to production. Ideally that rollout should happen automatically, should be phased-in, and should be aborted and rolled back if production monitoring detects any problems.
We have this, and it is what builds the manifest (as a git repo containing the submodules).
The problems occur when you have multiple commits to different services. If a build is marked "red" because it fails a "future integration" then it just means that it is failing at that point in time. It may, following a commit of a dependency, become valid. However, it would need to have its build manually retriggered in order to be classified as such.
This becomes cumbersome when you have a not insignificant number of services being committed to on a regular basis.
[+] [-] TomFrost|10 years ago|reply
While versioned APIs can alleviate some of your pain by tracking changes to your data models, you still conflict with data you already have stored in schemaless DBs when those models update.
In a Node or frontend JS stack, I solve that problem with Vers [1]. In any other stack, the idea is fairly simple to replicate: version your models _inside_ your code by writing a short diff between each version and the previous one every time the model changes. Any time you pull data from a DB or accept it via an API, just slip in a call to update it to the latest version. Now your microservice only has to be concerned with the most up-to-date version of this data, and your API endpoints can use the same methods to downgrade any results back to whatever version that endpoint is using. Frankly, that makes versioning your APIs far simpler too, as you move the versioning to the model layer (where all data manipulation really should be) and only need to version the actual API when you change how external services need to interact with it.
And now your other microservices can update to the new schema at their leisure. No more dependency chain driven by your models.
[1] https://github.com/TechnologyAdvice/Vers
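A minimal sketch of that in-code model-versioning idea (inspired by Vers, but not its actual API; the field names and version numbers are made up): every schema change is a small upgrade/downgrade function, and data is migrated on read so the service only ever sees the latest shape.

```python
# Hypothetical example: v1 stored first/last name, v2 stores full_name.
# Each schema change is one upgrade function and one downgrade function.

def upgrade_1_to_2(d):
    d = dict(d)
    first, last = d.pop("first"), d.pop("last")
    d["full_name"] = f"{first} {last}"
    d["version"] = 2
    return d

def downgrade_2_to_1(d):
    d = dict(d)
    first, last = d.pop("full_name").split(" ", 1)
    d["first"], d["last"] = first, last
    d["version"] = 1
    return d

UPGRADES = {1: upgrade_1_to_2}      # from-version -> step up
DOWNGRADES = {2: downgrade_2_to_1}  # from-version -> step down
LATEST = 2

def to_latest(doc):
    """Upgrade a document read from a DB or API to the newest schema."""
    while doc.get("version", 1) < LATEST:
        doc = UPGRADES[doc.get("version", 1)](doc)
    return doc

def to_version(doc, target):
    """Downgrade a latest-version document for an older API consumer."""
    while doc.get("version", 1) > target:
        doc = DOWNGRADES[doc["version"]](doc)
    return doc
```

The upgrade path is a chain, so a v1 document stored years ago still walks step by step up to whatever the latest version is.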
[+] [-] supermatt|10 years ago|reply
How do you ensure that a service can be consumed? Or that an event is constructed with the correct type or parameters? Surely interoperability is the key for any SOA?
Vers looks interesting - I'll have a look at that! Thanks!
[+] [-] grhmc|10 years ago|reply
Also look into the `repo` tool by AOSP for managing many repositories.
At Clarify.io, we have about 60 repositories and 45 services we deploy.
[+] [-] supermatt|10 years ago|reply
How do you define what is considered the current version for each microservice in your application?
[+] [-] janpieterz|10 years ago|reply
As other people here have noted, you should always keep the interface backwards compatible. If needed, make a second version of the API or the messages, so you never really have to deploy more than a couple of services whose behavior has actually changed. The services just interacting with those services should see the same interface, whether it is a couple of versions older or newer.
I'd recommend watching the muCon videos [1] or Udi Dahan's Advanced Distributed Systems Design [2] for more in-depth reference material. If you're transforming a team of engineers, I can really advise joining the latter and afterwards ordering the videos so you can use them with your team as training material. This is less about microservices being micro, and more about setting up a distributed, service-oriented architecture.
[1] https://skillsmatter.com/conferences/6312-mucon#skillscasts
[2] http://www.udidahan.com/training/
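The "keep the interface backwards compatible" advice above can be sketched like this (an illustrative example, not from any particular framework; the order fields are made up): the internal model evolves, but each API version keeps rendering the contract its consumers expect.

```python
# Current internal model: totals are now stored in cents.
ORDER = {"id": 7, "customer": "ACME", "total_cents": 1999}

def render_v1(order):
    # v1 exposed a float "total"; keep serving that shape even though
    # the internal model has changed underneath it.
    return {"id": order["id"], "customer": order["customer"],
            "total": order["total_cents"] / 100}

def render_v2(order):
    # v2 matches the current internal model.
    return dict(order)

RENDERERS = {"v1": render_v1, "v2": render_v2}

def get_order(version):
    """Serve the same data at whichever API version the consumer speaks."""
    return RENDERERS[version](ORDER)
```

Old consumers keep calling v1 unchanged; only services that actually need the new behavior move to v2, on their own schedule.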
[+] [-] supermatt|10 years ago|reply
I suppose the synchronisation of consumer and service compatibility is the biggest concern.
So far, everybody is focusing solely on backwards compatibility, but not future compatibility, which is what the contracts are for.
With regards to the backwards compatibility - breaking changes happen! As long as the service remains functional, and remains compatible with its consumer contracts (which can also change), I shouldn't need to worry about deprecating APIs. Anyone keeping deprecated functionality around in an environment where they control both the services and the consumers is simply asking for problems.
I can't see how it can be possible to not control the composition of microservices. Surely that's exactly what my CI pipeline is doing? Composing a network of compatible services?
[+] [-] ohitsdom|10 years ago|reply
http://www.troyhunt.com/2014/02/your-api-versioning-is-wrong...
[+] [-] karka91|10 years ago|reply
You can also do parametric jobs in Jenkins, which would allow combining arbitrary microservice versions.
Or just version your APIs and declare explicitly what microservice uses what version of the API to communicate with the other service.
[+] [-] supermatt|10 years ago|reply
The approach is similar to parametric builds in Jenkins. The problem is the management of the parameters.
For example:
msA v1.0 & msB v1.0 are individually green, and pass all integration tests. Green build for app.
msB is updated to v1.1. Passes individual tests but fails integration with msA v1.0. Red build for app.
msA is updated to v1.1. Passes individually. Still fails integration with msB v1.1. Red build for app.
However. msA v1.1 is still compatible with msB v1.0, so we could have a green app build with a newer version of msA.
Automating this process is what is becoming cumbersome. We have many more services in dev at any one time.
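One way to sketch automating the scenario above (a toy example; the version lists and test results are hypothetical): record pairwise integration outcomes, then search for the newest combination in which every pair is green, instead of only ever testing "latest everything".

```python
from itertools import product

AVAILABLE = {"msA": ["1.0", "1.1"], "msB": ["1.0", "1.1"]}  # oldest -> newest

# Outcome of pairwise integration tests, as recorded by CI:
# msA 1.1 + msB 1.1 is red, but msA 1.1 + msB 1.0 is green.
GREEN_PAIRS = {
    (("msA", "1.0"), ("msB", "1.0")),
    (("msA", "1.1"), ("msB", "1.0")),
}

def newest_green(available, green_pairs):
    """Return the newest all-green combination of service versions, or None."""
    names = sorted(available)
    # Iterate newest-first so the first all-green combination wins.
    for versions in product(*(list(reversed(available[n])) for n in names)):
        pinned = list(zip(names, versions))
        all_green = all(
            (a, b) in green_pairs or (b, a) in green_pairs
            for i, a in enumerate(pinned) for b in pinned[i + 1:]
        )
        if all_green:
            return dict(pinned)
    return None
```

This finds the "green app build with a newer version of msA" case without anyone manually retriggering builds; the obvious caveat is that the search space grows combinatorially with the number of services, so in practice you would prune it (e.g. only vary the services that changed since the last green manifest).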
[+] [-] davismwfl|10 years ago|reply
Microservices should not be directly talking to each other. This couples them in a way that a small API change can be breaking. Instead, use a messaging solution so that each service is passing messages and grabbing messages off a queue to do work. This is the easiest way to prevent coupling. It also allows you to version messages if need be, and you can deploy a new service to consume those messages. We use JSON, so we can add to a message with no ill effect, and we are careful about removing any required attributes. So we haven't had a need to version messages, but the ability is there if we find it is needed at some point.
Adding messaging does increase complexity in some ways, but once you pass having a handful of services this is the easiest way to manage it.
As a side note. In our solution we have an API that the website and soon to come mobile app tie to. That API interfaces directly with some data schemas but in many places it simply adds messages to a queue for processing.
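A small sketch of the message style described above (the message fields are made up for illustration): messages carry a version, consumers read only the attributes they depend on, and producers can add new optional fields without breaking anyone.

```python
import json

def handle_resize_message(raw):
    """Consume one JSON message off the queue, tolerantly."""
    msg = json.loads(raw)
    version = msg.get("version", 1)
    if version != 1:
        # A newer, incompatible message format would be handled by a
        # separately deployed consumer rather than patched in here.
        raise ValueError(f"unsupported message version {version}")
    # Read only the attributes this consumer depends on; any extra keys
    # a newer producer added are simply ignored.
    return {"image_id": msg["image_id"], "width": msg["width"]}
```

So when a producer starts adding, say, a `priority` field, existing consumers keep working untouched; only removing or renaming a required attribute forces a version bump.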
[+] [-] mattmanser|10 years ago|reply
All that will happen is that it won't be able to process the message, instead of not being able to serve the request.
This thread seems to be full of mad people determined to make simple concepts incredibly complex.
[+] [-] marcosdumay|10 years ago|reply
The answer, of course, is that you version it. Not put it in version control, but manually assign version numbers to it.
You try to make it possible to use several versions at the same time, but that's not always possible. If you have to use only one version, make sure not to make any incompatible changes in a single step: first you deprecate old functionality, some time later you remove it. Sometimes even that is impossible; it's natural, but it will hurt anyway, so keep those times to a minimum.
Also, make sure you mark your versions differently for features added and incompatible changes, so that developers can express things like "I'll need an API newer enough for implementing feature X, but old enough so that feature Y is still there".
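That last point maps naturally onto semver-style numbers (a sketch under that assumption: minor bump for added features, major bump for incompatible changes), so a consumer can express "new enough for feature X, old enough that feature Y still exists" as a version range.

```python
def parse(v):
    """Turn '2.3.0' into a comparable tuple (2, 3, 0)."""
    return tuple(int(p) for p in v.split("."))

def satisfies(api_version, minimum, below):
    """True if minimum <= api_version < below.

    'minimum' is the release that added feature X; 'below' is the major
    release that removed feature Y.
    """
    return parse(minimum) <= parse(api_version) < parse(below)
```

A consumer needing feature X (added in 2.1.0) while feature Y survives until 3.0.0 would require `satisfies(v, "2.1.0", "3.0.0")`.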
[+] [-] BrandonM|10 years ago|reply
This approach is almost certainly not a robust, long-term solution, but it has served us well for a couple years, allowing us to evolve our APIs quickly without spending any of our early dev effort on internal versioning.
Whether it's appropriate for you comes down to your reason for using microservices in the first place.
[+] [-] mateuszf|10 years ago|reply
[+] [-] supermatt|10 years ago|reply
Microservices must be compatible with each other. We can't simply bring up the latest version of "microservice A", because the consuming "microservice B" may not have been updated to account for API changes (which are enforced by testing contracts). That's what this master repo is for: to track which microservices work with each other.
Obviously, the master application is dependent on the microservices. Microservices are dependent on specific versions of other microservices, etc. That is the problem I am trying to solve.
[+] [-] alexro|10 years ago|reply
http://blog.factual.com/docker-mesos-marathon-and-the-end-of...
[+] [-] cies|10 years ago|reply
Consider Semver for your interfaces. This is really important.
[+] [-] bitsofagreement|10 years ago|reply
Versioning an API is a decision by the API provider to let the consumer deal with forward and backward compatibility issues. I prefer focusing on link relations or the media type rather than the URI or some other versioning technique, because it is consistent with the link relations being the point of coupling for your API (wrt hypermedia-based APIs), which makes managing and reasoning about changes to your API less complicated.
Whenever possible, hypermedia-based media type designers should use the technique of extending to make modifications to a media type design. Extending a media type design means supporting compatibility. In other words, changes in the media type can be accomplished without causing crashes or misbehavior in existing client or server implementations. There are two forms of compatibility to consider when extending a media type: forward and backward.
Forward-compatible design changes are ones that are safe to add to the media type without adversely affecting previously existing implementations. Backward-compatible changes are ones that are safe to add to the media type design without adversely affecting future implementations.
In order to support both forward and backward compatibility, there are some general guidelines that should be followed when making changes to media type designs. 1) Existing design elements cannot be removed. 2) The meaning or processing of existing elements cannot be changed. 3) New design elements must be treated as optional.
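Those three guidelines translate directly into a client-side "tolerant reader" (a sketch; the field names are illustrative): unknown elements are ignored, and newer optional elements are read with defaults.

```python
def read_item(doc):
    """Read a representation, surviving both older and newer servers."""
    return {
        "name": doc["name"],          # existing element: guaranteed present
        "tags": doc.get("tags", []),  # newer optional element: safe default
        # Any other keys the server added are simply ignored, so new
        # design elements never crash this client.
    }
```

An older server that has never heard of `tags`, and a newer server that added extra fields, both produce a valid result here, which is exactly what forward and backward compatibility mean in practice.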
In short favor extending the media type or link relation and focus on compatibility. Versioning a media type or link relation is essentially creating a new variation on the original, a new media type. Versioning a media type means making changes to the media type that will likely cause existing implementations of the original media type to “break” or misbehave in some significant way. Designers should only resort to versioning when there is no possible way to extend the media type design in order to achieve the required feature or functionality goals. Versioning should be seen as a last resort.
Any change to the design of a media type that does not meet the previously described requirements indicates that a new version of the media type is needed. Examples of these changes are: 1) A change that alters the meaning or functionality of an existing feature or element. 2) A change that causes an existing element to disappear or become disallowed. 3) A change that converts an optional element into a required element.
While versioning a media type should be seen as a last resort, there are times when it is necessary. The following guidelines can help when creating a new version of a media type.
1) It should be easy to identify new versions of a media type, e.g.:
   a) In the media type name: application/vnd.custom+xml vs. application/vnd.custom-v2+xml
   b) In a media type parameter: application/custom+json;version=1 vs. application/custom+json;version=2
   c) In a separate header:
      PUT /users/1 HTTP/1.1
      Host: www.example.org
      Content-Type: application/vnd.custom+xml
      Content-Length: xxx
      Version: 2
      …
2) Implementations should reject unsupported versions.
Hope this helps!