evanrich | 4 years ago
We used Kafka for event-driven microservices quite a bit at Uber. I led the team that owned schemas on Kafka there for a while. We just did not accept breaking schema changes within a topic, same as you would expect from any other public-facing API. We also didn't allow multiplexing a topic with multiple schemas. This wasn't just because it made my life easier. A large portion of our topics went on to become analytical tables in Hive, and breaking changes would break those tables. If you absolutely have to break the schema, make a new topic with a new consumer and phase out the old one. This puts a lot of onus on the producer, so we tried to build tools to help. We had a central schema registry, pairing schemas to topics, that showed producers who their consumers were, so if breaking changes absolutely had to happen, they knew who to talk to. In practice, though, we never got much pushback on the no-breaking-changes rule.
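The "no breaking changes within a topic" rule can be sketched as a compatibility check a registry runs before accepting a new schema version. This is a minimal illustration, not Uber's actual registry: the schema shape (`{"fields": {name: {"type": ..., "required": ...}}}`) and the `is_backward_compatible` helper are hypothetical.

```python
# Hypothetical sketch of the kind of backward-compatibility check a central
# schema registry can enforce before accepting a new schema version.
# The schema format here is an assumption for illustration only.

def is_backward_compatible(old_schema: dict, new_schema: dict) -> bool:
    """A change breaks existing consumers if it removes a field, changes a
    field's type, or adds a new *required* field old producers never sent."""
    old_fields = old_schema["fields"]
    new_fields = new_schema["fields"]
    for name, spec in old_fields.items():
        if name not in new_fields:
            return False                          # field removed: breaking
        if new_fields[name]["type"] != spec["type"]:
            return False                          # type changed: breaking
    for name, spec in new_fields.items():
        if name not in old_fields and spec.get("required", False):
            return False                          # new required field: breaking
    return True

v1 = {"fields": {"driver_uuid": {"type": "string", "required": True}}}
# Adding an optional field is fine; removing driver_uuid would not be.
v2 = {"fields": {"driver_uuid": {"type": "string", "required": True},
                 "city": {"type": "string", "required": False}}}
v3 = {"fields": {"city": {"type": "string", "required": False}}}
```

Real registries (e.g. Confluent Schema Registry with Avro) implement far richer rules, but the gate is the same: reject the publish instead of breaking the downstream Hive tables.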
DLQ practices were decided by teams based on need; there are too many considerations to make blanket rules. Where in your code did it fail? Is this consumer idempotent? Have we consumed a more recent event that would have overwritten this one? Are you paying for some API that your auto-retry churning away in your DLQ is going to run up a huge bill against? Sometimes you may not even want a DLQ, you want a poison pill. That lets you assess what is happening immediately and not have to worry about replays at all.
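The questions above can be read as a per-failure decision procedure. Here is a minimal sketch (no real Kafka client; all names are hypothetical) of a consumer that skips superseded events, dead-letters cheap failures, and halts entirely (poison pill) when blind retries would be costly:

```python
# Sketch of the per-team failure-handling decisions described above.
# Not a real Kafka consumer; Event/Consumer and the retry_is_expensive
# flag are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class Event:
    key: str
    version: int      # e.g. offset or event timestamp
    payload: dict

@dataclass
class Consumer:
    latest_seen: dict = field(default_factory=dict)  # key -> newest version handled
    dlq: list = field(default_factory=list)
    halted: bool = False

    def handle(self, event: Event, process, *, retry_is_expensive: bool) -> str:
        if self.halted:
            return "halted"
        # A newer event for this key already overwrote this one: skip it.
        if self.latest_seen.get(event.key, -1) >= event.version:
            return "stale-skipped"
        try:
            process(event)
            self.latest_seen[event.key] = event.version
            return "ok"
        except Exception:
            if retry_is_expensive:
                # Poison pill: stop the consumer so a human assesses now,
                # rather than churning retries against a paid API.
                self.halted = True
                return "poison-pill"
            self.dlq.append(event)               # cheap to retry later
            return "dlq"
```

The point is not this particular policy but that the right policy depends on idempotency, ordering, and cost, which is why blanket rules don't work.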
I hope one of the books you are talking about is Designing Data-Intensive Applications, because it is really fantastic. I joke that it's frustrating that so much of what I learned over years on the data team could be written so succinctly in a book.
sideway | 4 years ago
Two follow-up questions, if you don't mind me asking, even though I understand you were not on the publishing side:
1. Do you know if changes in the org structure (e.g. when Uber was growing fast and, I guess, new teams/products were created and existing teams/products were split) had a significant effect on the schemas that had already been published? For example, when a service is split into two and the dataset of the original service is now distributed, what patterns have you seen work well enough to avoid breaking everyone downstream?
2. Did you have strong guidelines on how to structure events? Were they entity-based, with each message carrying a snapshot of the state of the entities, or action-based, describing the business logic that occurred? Maybe both?
And yes, one of the books I'm talking about is indeed Designing Data-Intensive Applications, and I fully agree with you that it's a fantastic piece of work.
evanrich | 4 years ago
For 2, it completely depends on the source of the trigger. The first event in a chain probably only has enough information to know that it should produce an event, usually as quickly as possible, so no additional DB or API fetches. So you might get something in the driver status topic that contains {driver_uuid, new_status, old_status}; then, based on what downstream consumers may want to do in response to that event, you may need more info, so you may get more entity information in derived topics. Even pure entity-based messages would have needed a trigger, so in our topics that tail databases, you may have the full row as a message along with the action that occurred, like {op: insert, msg: {entity data… }}.
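The two message shapes described above can be made concrete with a small sketch. The driver-status fields come from the comment itself; every other field name (`status`, `city`) is a hypothetical illustration:

```python
# Illustrative sketch of the two event shapes described above.
# Only driver_uuid/new_status/old_status and the {op, msg} envelope come
# from the comment; the other fields are assumptions.

# 1. Trigger event: emitted as fast as possible, no extra DB/API lookups,
#    so it carries only what the producer already had in hand.
trigger_event = {
    "driver_uuid": "abc-123",
    "new_status": "online",
    "old_status": "offline",
}

# 2. Envelope from a topic that tails a database: the action that occurred
#    plus the full row, so derived topics can enrich without another fetch.
cdc_event = {
    "op": "insert",
    "msg": {
        "driver_uuid": "abc-123",   # hypothetical entity fields
        "status": "online",
        "city": "SF",
    },
}
```

Derived topics then sit between the two: they start from a trigger event and join in whatever entity state their consumers need.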
sumtechguy | 4 years ago
Now, what goes into that registry is typically version controlled in some way, whether for rebuilding a clean environment if needed or as part of a CI/CD system.