Everyone is hating on gRPC in this thread, but I thought I'd chime in as to where it shines. Because of the generated message definition stubs (which require additional tooling), clients almost never send malformed requests and servers send a well-understood response.
This makes stable APIs so much easier to integrate with.
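For illustration, this is the kind of contract those stubs are generated from — a hypothetical service definition; protoc plus a gRPC plugin turns it into typed client and server code:

```proto
syntax = "proto3";

package example.v1;

// Hypothetical service: the generated client can only send a well-typed
// GetUserRequest, and the server can only return a GetUserResponse.
service UserService {
  rpc GetUser (GetUserRequest) returns (GetUserResponse);
}

message GetUserRequest {
  string user_id = 1;
}

message GetUserResponse {
  string user_id = 1;
  string display_name = 2;
}
```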
inetknght|1 year ago
Sure. Until you need some fields to be optional.
> This makes stable APIs so much easier to integrate with.
Only on your first iteration. After a year or two of iterating you're effectively back to JSON: checking whether fields exist and re-validating your data. There are also a half-dozen bugs you can't reproduce or explain, so you just work around them with retries.
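The re-validation layer you end up writing looks something like this — a minimal sketch, with made-up field names and rules:

```python
import json

# Hypothetical payload rules: after a few schema iterations, the server can no
# longer trust which fields are present, so it re-validates everything by hand.
def validate_order(raw: str) -> dict:
    data = json.loads(raw)
    if "order_id" not in data:
        raise ValueError("missing order_id")
    # Old clients send "amount" (a decimal string), newer ones send
    # "amount_cents" (an integer): accept both.
    if "amount_cents" in data:
        cents = int(data["amount_cents"])
    elif "amount" in data:
        cents = int(round(float(data["amount"]) * 100))
    else:
        raise ValueError("missing amount")
    if cents <= 0:
        raise ValueError("non-positive amount")
    return {"order_id": data["order_id"], "amount_cents": cents}

print(validate_order('{"order_id": "A1", "amount": "12.50"}'))
```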
hedora|1 year ago
There’s also a gaping security hole in the design. There’s no sane support for protocol versioning or required fields (proto3 dropped required fields entirely), so every field of every message type ends up being optional in practice.
So, if a message has N fields, there are 2^N combinations of present and absent fields that the generated stubs will accept and pass to you, and it’s up to business logic to decide which combinations are valid.
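A minimal sketch of that problem, with a plain Python dataclass standing in for a generated stub (the message shape is hypothetical):

```python
from dataclasses import dataclass
from typing import Optional

# Stand-in for a generated stub: every field is optional, so all 2^3 = 8
# presence combinations deserialize "successfully".
@dataclass
class PaymentRequest:
    account_id: Optional[str] = None
    card_token: Optional[str] = None
    bank_ref: Optional[str] = None

def validate(msg: PaymentRequest) -> None:
    # Business logic decides which combinations are valid: account_id is
    # always required, plus exactly one payment method.
    if msg.account_id is None:
        raise ValueError("account_id is required")
    methods = [f for f in (msg.card_token, msg.bank_ref) if f is not None]
    if len(methods) != 1:
        raise ValueError("exactly one of card_token/bank_ref must be set")

validate(PaymentRequest(account_id="a", card_token="t"))  # valid combination
```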
It’s actually worse than that, since the other side of the connection could be running a newer schema than you understand. In that case, the bindings silently accept messages with unknown fields, and it’s up to you to decide how to handle them.
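A sketch of the unknown-field case, again with a plain dict standing in for a decoded message (field names are hypothetical):

```python
KNOWN_FIELDS = {"user_id", "email"}

def check_unknown(decoded: dict, strict: bool = False) -> dict:
    # Proto bindings keep or drop unknown fields silently; here we at least
    # surface them so the caller can pick a policy: reject or strip.
    unknown = set(decoded) - KNOWN_FIELDS
    if unknown and strict:
        raise ValueError(f"unknown fields: {sorted(unknown)}")
    return {k: v for k, v in decoded.items() if k in KNOWN_FIELDS}

# A newer peer added is_admin; lenient mode silently strips it.
msg = {"user_id": "u1", "email": "a@b.c", "is_admin": True}
print(check_unknown(msg))
```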
All of this means that, in practice, endpoints and clients accumulate validation bugs over time. At that point, maliciously crafted messages can bypass validation checks and exploit code that assumes validated messages are well-formed.
I’ve never met a gRPC proponent who understands these issues, and every gRPC application I’ve worked with has had these problems.
abalaji|1 year ago
https://stackoverflow.com/a/62566052
When your API changes that dramatically, you should use a new message definition on the client and server and deprecate the old RPC.
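In proto terms, that usually means adding a new method and message pair next to the old one rather than mutating it — a hypothetical sketch:

```proto
service UserService {
  // Old shape kept for existing clients, marked deprecated.
  rpc GetUser (GetUserRequest) returns (GetUserResponse) {
    option deprecated = true;
  }
  // The new contract gets its own messages instead of mutating the old ones.
  rpc GetUserV2 (GetUserV2Request) returns (GetUserV2Response);
}
```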
matrix87|1 year ago
Every time this has happened to me, it's been because of one-sided contract negotiation and dealing with teams whose incentives aren't aligned.
i.e. they can send whatever shit they want, and we have to interpret it and make it work.