top | item 28045652

evanrich | 4 years ago

Like others have said, it is just one tool in the tool box.

We used Kafka for event-driven microservices quite a bit at Uber. I led the team that owned schemas on Kafka there for a while. We just did not accept breaking schema changes within a topic, same as you would expect from any other public-facing API. We also didn't allow multiplexing a topic with multiple schemas. This wasn't just because it made my life easier: a large portion of our topics went on to become analytical tables in Hive, and breaking changes would break those tables. If you absolutely have to break the schema, make a new topic with a new consumer and phase out the old one. This puts a lot of onus on the producer, so we tried to build tools to help. We had a central schema registry that paired schemas with topics and showed producers who their consumers were, so if breaking changes absolutely had to happen, they knew who to talk to. In practice, though, we never got much pushback on the no-breaking-changes rule.
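The no-breaking-changes rule can be sketched as a registry-side check. This is only an illustration of the idea, not Uber's actual tooling; the schema representation (a flat dict of field name to type name) and the helper name are assumptions:

```python
# Illustrative backward-compatibility check for a flat record schema.
# A schema is modeled here as {field_name: type_name} -- a simplification
# of what a real registry (e.g. Avro-based) would store.

def is_backward_compatible(old_schema, new_schema):
    """Reject changes that would break existing consumers:
    removing a field or changing a field's type. Adding fields is fine."""
    for field, ftype in old_schema.items():
        if field not in new_schema:
            return False          # removed field -> breaking change
        if new_schema[field] != ftype:
            return False          # retyped field -> breaking change
    return True                   # purely additive changes pass

old = {"driver_uuid": "string", "status": "string"}
ok_change = {"driver_uuid": "string", "status": "string", "city": "string"}
bad_change = {"driver_uuid": "string"}  # drops "status"
```

Here `is_backward_compatible(old, ok_change)` passes while `is_backward_compatible(old, bad_change)` fails, which is the gate that forces a new topic instead of an in-place break.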

DLQ practices were decided by teams based on need; there are too many considerations to make blanket rules. Where in your code did it fail? Is this consumer idempotent? Have we consumed a more recent event that would have overwritten this one? Are you paying for some API that the auto-retry churning away in your DLQ is going to cost you a ton of money? Sometimes you may not even want a DLQ; you want a poison pill. That lets you assess what is happening immediately and not have to worry about replays at all.
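The DLQ-versus-poison-pill choice boils down to what the consumer loop does after retries are exhausted. A minimal sketch, with in-memory lists standing in for Kafka topics (an assumption; a real consumer would use a Kafka client and a dedicated DLQ topic):

```python
# Sketch of per-team failure handling. The list below stands in for a
# dead-letter topic; MAX_RETRIES and the function names are illustrative.

MAX_RETRIES = 3
dead_letter_queue = []

def consume(event, handler, poison_pill=False):
    """Try a handler a few times; on persistent failure either park the
    event in a DLQ for later replay, or re-raise and halt consumption
    entirely (the poison-pill approach) for immediate triage."""
    for attempt in range(1, MAX_RETRIES + 1):
        try:
            return handler(event)
        except Exception:
            if attempt == MAX_RETRIES:
                if poison_pill:
                    raise                        # stop the world, go look
                dead_letter_queue.append(event)  # park it, keep consuming
                return None
```

With `poison_pill=False` the consumer keeps moving and failures pile up in the DLQ; with `poison_pill=True` the first persistent failure stops the consumer, so there is no replay ordering to reason about later.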

I hope one of the books you are talking about is Designing Data-Intensive Applications, because it is really fantastic. I joke that it is frustrating that so much of what I learned over years on the data team could be written so succinctly in one book.


sideway | 4 years ago

Thanks for your detailed answer, really appreciate it.

Two follow up questions if you don't mind me asking, even though I understand you were not on the publishing side:

1. Do you know if changes in the org structure (e.g. when Uber was growing fast and, I guess, new teams/products were created and existing teams/products were split) had a significant effect on schemas that had already been published? For example, when a service is split in two and the dataset of the original service is now distributed, what patterns have you seen work well for not breaking everyone downstream?

2. Did you have strong guidelines on how to structure events? Were they entity-based, with each message carrying a snapshot of an entity's state, or action-based, describing the business logic that occurred? Maybe both?

And yes, one of the books I'm talking about is indeed Designing Data-Intensive Applications, and I fully agree with you that it's a fantastic piece of work.

evanrich | 4 years ago

For 1, no example really comes to mind, but I guess there could be cases where a service went from publishing an event with all of its related data, then split into a service where that becomes more expensive to do (e.g. the data is no longer in memory; it's behind the API of the old service). In some cases you can have very simple services that consume a message, make a few calls to services or databases to hydrate it with more information, then produce that message to another topic that the original consumers can switch to. More commonly, though, if the data model is making a drastic change where the database is being split and owned by two new services, you will have to get consumers in on the change to make sure everyone knows the semantics of the new setup.
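The "simple hydrator service" mentioned above can be sketched in a few lines. All names here are hypothetical, and a dict stands in for the old service's API (in reality this would be an RPC or DB call), with lists standing in for the two topics:

```python
# Sketch of the hydrator pattern: consume a thin event, enrich it with
# entity data fetched from the original service, and produce the
# enriched event to a derived topic for consumers to switch to.

# Stand-in for the old service's API (hypothetical data).
driver_service = {"d1": {"name": "Alice", "city": "SF"}}

def hydrate(thin_event):
    """Merge the thin event with the entity data it references."""
    entity = driver_service.get(thin_event["driver_uuid"], {})
    return {**thin_event, **entity}

thin_topic = [{"driver_uuid": "d1", "new_status": "online"}]
enriched_topic = [hydrate(e) for e in thin_topic]
```

The original producer keeps publishing the thin event unchanged; only consumers that need the extra fields move to the enriched topic, so nothing downstream breaks during the migration.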

For 2, it completely depends on the source of the trigger. The first event in a chain probably only has enough information to know that it should produce an event, usually as quickly as possible, so no additional DB or API fetches. So you might get something in the driver status topic that contains {driver_uuid, new_status, old_status}. Then, based on what downstream consumers want to do in response to that event, you may need more info, so you may get more entity information in derived topics. Even pure entity-based messages need a trigger, so in our topics that tail databases, you may have the full row as a message along with the action that occurred, like {op: insert, msg: {entity data… }}.
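Concretely, the two shapes described above might look like this; the field names are illustrative examples, not actual production schemas:

```python
# Action-based trigger event: just enough to know something happened,
# produced as fast as possible with no extra fetches.
status_event = {
    "driver_uuid": "d1",
    "new_status": "online",
    "old_status": "offline",
}

# CDC-style entity event from a database-tailing topic: the operation
# that triggered it, plus the full row as a snapshot.
row_event = {
    "op": "insert",
    "msg": {"driver_uuid": "d1", "name": "Alice", "status": "online"},
}
```

The action event tells you *what changed*; the entity event tells you *what the state is now*, with the triggering operation attached.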

mk89 | 4 years ago

I am not the author of the original message; however, I also recommend "Building Microservices, 2nd edition" if you are trying to answer such questions.

kieranmaine | 4 years ago

When you say you used a central schema registry, did you have a single repo containing all topics and schemas?

sumtechguy | 4 years ago

Kafka has a 'schema registry' service. Typically you use Avro/JSON to define the schema of each message; I think they added a couple of new serialization formats in recent versions. When you define your consumer/producer, you also tell Avro what the schema registry service URL is. Basically it helps you keep your messages in spec. It also supports versioning, so you can have different versions of messages in flight on the same topic. If you add a new field, it is fairly trivial for both sides to know what is going on; if you remove a field, it becomes more tricky.
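The asymmetry (adding a field is easy, removing one is tricky) comes from reader-side resolution: a field added with a default can be filled in when an older message arrives, but a removed field that a reader still requires has no fallback. A plain-Python sketch of that intuition (Avro's actual resolution rules are richer than this; the schema shape here is an assumption):

```python
# Sketch of reader-side schema resolution. The reader schema maps each
# field to its default value, or None if the field is required with no
# default. (Real Avro resolution handles types, unions, etc.)

reader_schema = {
    "driver_uuid": None,      # required, no default
    "status": None,           # required, no default
    "city": "unknown",        # added in a later version, with a default
}

def resolve(message):
    """Fill defaulted fields missing from an older-schema message;
    fail loudly if a required field is gone."""
    out = {}
    for field, default in reader_schema.items():
        if field in message:
            out[field] = message[field]
        elif default is not None:
            out[field] = default       # added field: trivially compatible
        else:
            raise KeyError(field)      # removed required field: tricky
    return out
```

An old message without `city` resolves cleanly to the new schema; a message missing `status` cannot be resolved at all, which is why removals force the coordination the parent comments describe.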

Now, what goes into that registry is typically version controlled in some way, both for rebuilding a clean environment if needed and as part of a CI/CD system.