top | item 47014750

PessimalDecimal | 15 days ago

> Everything was good in the beginning, as long as everyone submits their .proto to a centralized repo. Once one team starts to host their own, things get broken quickly.

Is this an issue with protobufs per se, though? It's a data schema. How are people supposed to develop against a shared schema if a team doesn't - you know - share their schema? That would happen with any other choice of schema format.


ragall | 15 days ago

It's a problem with PB because it requires everything to be typed (unless you use Any), which requires all middleware to eagerly type-check all data passing through. With JSON, validation is typically done only by the endpoints, which allows much faster development.
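To illustrate that point, here's a minimal Python sketch (field names are invented): a JSON middleware can forward payloads it never parses, so only the endpoint needs to know the schema:

```python
import json

def middleware_forward(raw: bytes) -> bytes:
    """Schema-unaware middleware: forwards the payload without parsing it,
    so fields it has never heard of pass through untouched."""
    return raw

def endpoint_handle(raw: bytes) -> dict:
    """Only the endpoint parses and validates the fields it actually uses."""
    data = json.loads(raw)
    if "user_id" not in data:  # "user_id" is an invented example field
        raise ValueError("missing user_id")
    return data

payload = json.dumps({"user_id": 42, "show_checkbox": True}).encode()
result = endpoint_handle(middleware_forward(payload))
assert result["show_checkbox"] is True  # the new field survived the hop
```

Adding a field here means touching only the two endpoints that care about it, not every hop in between.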

There was a blog post a few years ago where an engineer working on the Google Cloud console complained that simply adding a checkbox to one of the pages required modifying ~20 internal protos and a six-month rollout. That's an obvious downside that I wish I knew how to fix.

PessimalDecimal | 15 days ago

My guess is there's more to that story than just "protobufs don't forward unknown fields," because that's not how they work by default. Take a look at https://protobuf.dev/programming-guides/proto3/#unknowns.

https://kmcd.dev/posts/protobuf-unknown-fields/ discusses the scenario you're hinting at.
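The retention behavior can be sketched at the wire-format level. This is a toy pure-Python decoder (not the real protobuf library) that handles only varint fields: a hop that knows just field 1 still carries field 99's bytes through unchanged, which is the proto3 (>= 3.5) unknown-field behavior:

```python
def encode_varint(n: int) -> bytes:
    """Encode a non-negative int as a protobuf base-128 varint."""
    out = bytearray()
    while True:
        b = n & 0x7F
        n >>= 7
        out.append((b | 0x80) if n else b)
        if not n:
            return bytes(out)

def decode_varint(buf: bytes, i: int) -> tuple[int, int]:
    """Decode a varint starting at index i; return (value, next index)."""
    shift = result = 0
    while True:
        b = buf[i]
        i += 1
        result |= (b & 0x7F) << shift
        if not b & 0x80:
            return result, i
        shift += 7

def encode_field(field_num: int, value: int) -> bytes:
    """Encode a varint (wire type 0) field: tag = field_num << 3 | 0."""
    return encode_varint(field_num << 3) + encode_varint(value)

KNOWN_FIELDS = {1}  # this hop's schema only declares field 1

def parse(buf: bytes):
    """Split a message into known fields and retained unknown bytes."""
    known, unknown, i = {}, bytearray(), 0
    while i < len(buf):
        start = i
        tag, i = decode_varint(buf, i)
        fnum, wtype = tag >> 3, tag & 7
        assert wtype == 0, "toy decoder handles varint fields only"
        value, i = decode_varint(buf, i)
        if fnum in KNOWN_FIELDS:
            known[fnum] = value
        else:
            unknown += buf[start:i]  # kept verbatim, not dropped
    return known, bytes(unknown)

def serialize(known: dict, unknown: bytes) -> bytes:
    """Re-emit known fields plus the retained unknown bytes."""
    out = bytearray()
    for fnum, value in sorted(known.items()):
        out += encode_field(fnum, value)
    return bytes(out) + unknown

msg = encode_field(1, 7) + encode_field(99, 123)  # field 99 is unknown here
known, unknown = parse(msg)
assert serialize(known, unknown) == msg  # the hop loses nothing
```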

It's possible that in the story you mention, each of those ~20 internal protos was a different message, and each hop between backends was translating data between nearly identical schemas. In that case, they'd all need to be updated to transport the new field. But that's a different problem, and the result of those engineers' choices about how to structure their service definitions.
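That failure mode can be sketched with plain dataclasses standing in for generated proto messages (all names here are invented):

```python
from dataclasses import dataclass

# Invented, nearly identical message types at two hops in a request path.
@dataclass
class FrontendRequest:
    user_id: int
    # A new `show_checkbox: bool` would have to be added here ...

@dataclass
class BackendRequest:
    user_id: int
    # ... and here, and threaded through every translation like the one below.

def translate(req: FrontendRequest) -> BackendRequest:
    """Explicit field-by-field copy between schemas: every hop that
    translates like this must be edited to carry any new field."""
    return BackendRequest(user_id=req.user_id)

assert translate(FrontendRequest(user_id=1)).user_id == 1
```

With ~20 such near-duplicate messages along the path, one checkbox means ~20 edits plus their rollouts, whereas a single shared message with unknown-field forwarding would need only the two ends to change.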