macmar | 5 years ago
The big problem is that the learning curve of the entire ecosystem is steep. Many people lack the minimum conceptual grounding that a distributed computing strategy requires, and do not even want to study and understand it; this generates discomfort and some erroneous claims that it is a cannon to kill a fly.
Right now, I am using Apache Kafka as an integration strategy between systems, to connect large factories spread out geographically. There was a steep learning curve for everyone involved (infrastructure, developers, and architects) to understand the ecosystem as a whole (Kafka Connect, the Kafka broker, ZooKeeper, the Streams strategy, MirrorMaker 2). We built the entire integration, moving files, in just 45 days due to an emergency imposed by government regulations.
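For the file-based piece, Kafka Connect's bundled FileStreamSource connector is the simplest starting point. A minimal sketch of such a connector config, assuming a hypothetical file path and topic name (the actual setup in a production deployment would likely use a more robust connector), might look like:

```json
{
  "name": "factory-file-source",
  "config": {
    "connector.class": "org.apache.kafka.connect.file.FileStreamSourceConnector",
    "tasks.max": "1",
    "file": "/data/exports/orders.txt",
    "topic": "factory-orders"
  }
}
```

Posting this JSON to the Connect REST API (`POST /connectors`) tails the file and publishes each line to the topic, which MirrorMaker 2 can then replicate between sites.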
We have almost 35 thousand messages a day (far from high volume) being transported between different locations, in a stable manner, and with the delivery time (milliseconds) the demand requires. And now, when we need to integrate a new factory that the company has bought, we can do that integration quickly because the entire base (the core) of the integration has been consolidated.
Everything is a matter of strategy and of using the solution correctly.