Used Stubby at Google (mainly Java), and was intimidated at first, then saw the light: when almost everything talks the same way, not only do you get C++, Java, Python, Go, and other languages speaking freely to each other, but other extra benefits too. For example, each RPC can carry as a "tag" (key/value?) the user/group it came from, and this can be used for budgeting:
For example: your internal backend A calls service B, which then calls some other service C. It's easy to log that A called B, but the fact that C was called because of what A asked is not. If the tag is propagated, though, then C can report: well, I was called by B, but that was on A to pay.
Then Dapper, their distributed tracing system, was helpful the few times I had to deal with oncall (actually, them asking me to do it). And in general, it felt like you never have to write any low-level socket code (which I love).
Unfortch, gRPC brings none of these things. If you want delegated caller, originator, trace ID, or any other kind of baggage propagated down your RPC call graph, you are doing it yourself with metadata injectors and extractors at every service boundary.
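A toy sketch of that do-it-yourself propagation pattern, not the real gRPC metadata API: every key name here is invented, and the point is only the shape of the injector/extractor pair each hop must run.

```python
# Toy sketch of hand-rolled baggage propagation (not real gRPC APIs).
# Each hop must copy the baggage keys from the incoming metadata into
# every outgoing call, or the chain breaks.

BAGGAGE_KEYS = {"originator", "trace-id"}  # hypothetical key names

def extract_baggage(incoming_metadata):
    """Pull propagated keys out of an incoming request's metadata."""
    return {k: v for k, v in incoming_metadata.items() if k in BAGGAGE_KEYS}

def inject_baggage(outgoing_metadata, baggage):
    """Attach the baggage to an outgoing request's metadata."""
    merged = dict(outgoing_metadata)
    merged.update(baggage)
    return merged

# A calls B with originator=A; B must re-inject when calling C.
incoming_at_b = {"originator": "service-a", "trace-id": "abc123",
                 "authorization": "bearer-xyz"}  # auth is hop-local, not baggage
baggage = extract_baggage(incoming_at_b)
outgoing_to_c = inject_baggage({"content-type": "application/grpc"}, baggage)

assert outgoing_to_c["originator"] == "service-a"  # C can now bill A
assert "authorization" not in outgoing_to_c        # hop-local keys stay behind
```

Real deployments hang this logic off client/server interceptors so no call site can forget it.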
I'm just going to jump in with an utterly pointless "woohoo!" As a gRPC shop, this is going to open up a lot of options for both our own infra and make it easier to support clients on AWS. Now if only Azure would make it easy to implement solutions that leverage gRPC...
AWS is suffering from a TLA problem. gRPC, on the other hand, is a decent name; at least you can guess at a glance that it's an RPC protocol. Meanwhile, you just have to know that ALB is a type of ELB.
- How should I pass an argument? Let me count the many ways:
1. Path parameters
2. Query parameters in the URL
3. Query parameters in the body of the request
4. JSON/YAML/etc. in the body of the request
5. Request headers (yes, people use these for API tokens, API versions, and other things sometimes)
- There's also the REST verb that is often super arbitrary. PUT vs POST vs PATCH... so many ways to do the same thing.
- HTTP response codes... so many numbers, so little meaning! There are so many ways to interpret a lot of these codes, and people often use 200 where they really "should" use 202... etc. Response codes other than 200 and 500 are effectively never good enough by themselves, so then we come to the next part:
- HTTP responses. Do we put the response in the body as JSON, MessagePack, or YAML? Which format do we standardize on? Response headers are used for... some things? Occasionally, responses like HTTP redirects will just throw HTML into API responses where you're normally using JSON.
- Bonus round: HTTP servers will often support compressing the response, but almost never do they allow compressing the request body, so if you're sending large requests frequently... well, oops.
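As an aside on that last point: when you control both ends, compressing a request body yourself is simple enough. A minimal Python sketch using only the standard library; the header names are the conventional ones, and server-side opt-in is assumed rather than shown.

```python
import gzip
import json

# Client side: compress a large JSON request body before sending.
payload = json.dumps({"items": list(range(1000))}).encode("utf-8")
body = gzip.compress(payload)
headers = {"Content-Encoding": "gzip",        # tells the server how to decode
           "Content-Type": "application/json"}

assert len(body) < len(payload)  # repetitive JSON compresses well

# Server side: a server that opts in must decompress before parsing.
decoded = json.loads(gzip.decompress(body))
assert decoded["items"][999] == 999
```

The catch the comment points at is exactly this opt-in: response compression is negotiated automatically via Accept-Encoding, while request compression only works if the server is known in advance to handle it.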
I don't personally have experience with gRPC, but REST APIs can be a convoluted mess, and even standardizing internally only goes so far.
I like the promise of gRPC, where it handles all of the mundane transport details for me, and as a bonus... it will generate client implementations from a definition file, and stub out the server implementation for me... in whatever languages I want.
Why wouldn't you want that?
I work at a startup which is ~10 months old, where we've decided to go all in on gRPC for all communications, both inter-service and client (web SPA and a CLI) to service.
Although the investment in tooling was significant in the beginning, it has truly paid dividends now: we can develop in Golang (microservices, CLI), Javascript (SPA), and Python (end-to-end testing framework), and have a single definition for all our API endpoints and messages in the form of protobufs. These protobufs automatically generate all client and server code and give us out-of-the-box backward and forward compatibility, increased performance due to the binary format over the wire, and more.
Our architect, who put together most of this infrastructure, has written an entire series of blog posts about how we use gRPC in practice, detailing our decisions and tooling: https://stackpulse.com/blog/tech-blog/grpc-in-practice-intro...
https://stackpulse.com/blog/tech-blog/grpc-in-practice-direc...
https://stackpulse.com/blog/tech-blog/grpc-web-using-grpc-in...
- You want to have inter-service RPC.
- You want not only unary calls, but also bidirectional streaming.
- You want a well-defined schema for your RPC data and methods.
- You want a cross-language solution that guarantees interoperability (no more JSON parsing differences! [1])
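A minimal Protobuf sketch of such a schema, with a unary and a bidirectional-streaming method (the service, message, and field names here are all invented for illustration):

```proto
syntax = "proto3";

package example.v1;

// A made-up service showing a unary call and a bidirectional stream.
service EventService {
  // Unary: one request, one response.
  rpc GetEvent(GetEventRequest) returns (Event);
  // Bidirectional streaming: both sides send a sequence of messages.
  rpc StreamEvents(stream EventFilter) returns (stream Event);
}

message GetEventRequest {
  string event_id = 1;
}

message EventFilter {
  string topic = 1;
}

message Event {
  string event_id = 1;
  string topic = 2;
  bytes payload = 3;
}
```

Running this file through protoc gives you typed client stubs and server skeletons in each language, which is where the cross-language interoperability guarantee comes from.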
If you're communicating between two systems, gRPC has a few benefits:
* keeps a socket open between them (HTTP/2) and puts all your method calls on that connection. So you don't have to set up connections on each call or handle your own pooling.
* comes with built-in, fast serialization and deserialization using protobuf.
* uses a definition language to generate SDKs for a whole bunch of languages.
* makes testing super easy because your testing team, if you have a separate one, can make an SDK in their preferred language and write tests.
Much better developer experience and performance writing HTTP services and code to call them.
Cons are
* not being able to use Postman / firebug, nothing on the wire is human-readable
* load balancer support is sketchy because of the use of HTTP trailers and the need for HTTP/2 along the full path. That's why AWS ALB supporting it is news.
* The auth story isn't very clear. Do you use the out of band setup or add tokens in every single RPC?
It makes it really nice to define APIs (like with OpenAPI or Swagger). There are a bunch of code generators out there that turn your definitions into native Swift, Objective-C, Java, or Go API stubs for either clients or servers.
It is a joy to work with in cross-functional teams: you can define your APIs whilst taking into account what API versioning would mean, how to define enums, how to rename fields whilst staying compatible on the wire, and other things.
Also, if you route a payload from service A via B to C, and each service is deployed independently and picks up API changes at different times, gRPC supports you in handling those scenarios.
OpenAPI can do all of this too, I guess, but gRPC definitions in Protobuf or Google Artman are just way quicker to understand and work with (at least for me).
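To make the rename point concrete: Protobuf identifies fields on the wire by tag number, not by name, so a rename is wire-compatible as long as the tag stays put. A tiny sketch (message and field names invented for illustration):

```proto
syntax = "proto3";

// Renaming a field is wire-compatible because the binary format
// identifies fields by tag number, not by name. This message used to
// declare `string username = 2;`; renaming it keeps tag 2:
message User {
  string id = 1;
  string display_name = 2;  // was "username"; same tag, same wire format
  reserved 3;               // a deleted field's tag, kept off-limits forever
}
```

One caveat: the protobuf JSON mapping uses field names, so a rename that is invisible on the binary wire can still break clients of a JSON transcoding layer.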
1. It's standardized: all the implementations for each language are roughly similar and have the same feature sets (middlewares, stubs, retries, hedging, timeouts, deadlines, etc.).
2. High performance framework/webserver in "every" language. No more "should I use flask or the built in http server or gunicorn or waitress or..."
3. Tooling can be built around schemas as code. There's a great talk that I highly recommend about some of the magic you can do [0].
4. Protos are great for serialization and not just over a network! Need to store configs or data in a range of formats (json, prototxt, binary, yaml)?
5. Streaming. It's truly amazing and can dramatically reduce latency if you need to do a lot of collection-of-things or as-soon-as-x processing.
6. Lower resource usage. Encoding/decoding protos is faster than encoding and decoding JSON. At high throughput that begins to matter.
7. Linting & standards can be defined and enforced programmatically [1]
8. Pressures people to document things. You comment your .c or .java code, why not comment your .proto?
[0] - https://youtu.be/j6ow-UemzBc?t=435
[1] - https://google.aip.dev/
Serialization/deserialization speed and reducing transfer size are good reasons for large throughput service-to-service communication. Also a decent ecosystem around code generation from .proto files and gateways to still support some level of JSON-based calls.
HTTP is super great for loosely coupled, request-based services.
RPC is more lightweight for persistent connections to stateful services. RPC makes broadcast easier than HTTP. Individual RPC requests have (much) less overhead than HTTP requests, which is very helpful when tight coupling is acceptable.
Trying to run, say, MMO gaming servers over HTTP is an exercise in always paying double for everything you want to do. (Also, trying to run an FPS gaming server over TCP instead of UDP is equally not the right choice!)
I started a recent project with gRPC but wound up moving to fbthrift after having a bad time with the C++ async server story. Overall I’d like to be using gRPC because the fbthrift documentation is weak, but thread-per-request is a non-starter for some use cases. From the gRPC source it looks like they’ve got plans to do something about it but it seems a ways off.
Really illustrates the dumbassery of sticking a (relatively) fast-moving application-layer protocol into the kernel. Now you can't update the Web Server without updating the operating system.
Might have been handy to beat benchmarks back in the day when people liked to whip them out for comparison, but IIS is under 10% according to Netcraft now. Time to fold up the tent and go home.
I suppose .Net Core is sticking with Http.sys to avoid implementing their own web server, but is tying yourself to the Windows shipping cycle worth it?
Does anyone have experience (good, bad, otherwise) using the gRPC JSON transcoding option for real-world stuff? I'm debating using it (still need REST clients sometimes) but I'm not sure how hacky it is.
We use it. It's pretty good. It has a lot of places you can hook in extra functionality. You get most of the HTTP error status codes for free, but we also have a filter that looks at outgoing protobuf messages for a certain field that indicates the message is a response to a create request, and that allows us to return an HTTP 202 instead of 200. We were even able to do Amazon-style request signing. One thing about request signing is that if you use protobuf key-value maps, the order is not deterministic on the wire. This broke our signing. Key-value maps are kind of a protobuf hack anyway, so we ended up using an array of structs. When it came time to add the JSON gateway, we found it pretty easy to write custom JSON serialization/deserialization code to convert the structs to a JSON map. This is all in Go, by the way.
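The map-ordering pitfall generalizes: if you sign serialized bytes, you need a canonical encoding. A small Python sketch of the repeated key/value-struct approach described above (the names and the JSON rendering are illustrative, not this commenter's actual Go code):

```python
import json

# Protobuf maps serialize in nondeterministic order, which breaks
# request signing. The workaround: model the map as a repeated
# key/value struct and sign a canonical JSON rendering of it.

def canonical_json(kv_pairs):
    """Render [(key, value), ...] as deterministic JSON bytes."""
    as_map = dict(kv_pairs)
    # sort_keys plus fixed separators gives byte-identical output
    # for the same logical content, regardless of input order.
    return json.dumps(as_map, sort_keys=True, separators=(",", ":")).encode()

a = canonical_json([("b", "2"), ("a", "1")])
b = canonical_json([("a", "1"), ("b", "2")])
assert a == b == b'{"a":"1","b":"2"}'  # same bytes either way
```

Whatever canonical form you pick, both signer and verifier must derive it from the decoded message, never from the raw wire bytes.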
They say not to rely on the output being stable, so I would recommend guaranteeing a stable translation yourself for a REST client. You can achieve this by translating from the JSON to your proto or grpc service structure yourself.
The grpc Gateway in Go worked quite well for us.
I have not tried the native Envoy decoding functionality, yet.
Also, you should look at Google Artman on github/googleapis, as it sometimes felt that defining the REST mappings in Protobuf was lacking some features. Using Google Artman, you kind of mix and match Protobuf with YAML definitions of your service.
We never had to use it, though. It just depends on where you want to put your authentication information.
As of today I would probably change my mind and make it explicit in the payload, i.e. in the Protobuf message, and not fiddle with headers any more.
Maybe I’m crazy, but here is something I have been toying with recently. I have defined services in protobuf and generated static TypeScript definitions for the services and associated messages. I then implemented my own flavor of RPC over a WebSocket connection, where RPC calls are implemented as two calls: a “start” call from client to server, and a “result” call from server to client. I don’t know if I would go this far down to the “metal”, if you will, on a team, but for my own project it’s been interesting.
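The core of such a scheme is correlating each "result" with its "start". A toy Python sketch of just that bookkeeping (no real WebSocket, and the message shape is invented, not the commenter's actual protocol):

```python
import itertools

# Toy bookkeeping for RPC-over-WebSocket: every "start" gets an id,
# and the matching "result" message carries the same id back.

class RpcChannel:
    def __init__(self, send):
        self._send = send                  # callable that ships a message
        self._ids = itertools.count(1)
        self._pending = {}                 # id -> callback awaiting a result

    def start(self, method, params, on_result):
        call_id = next(self._ids)
        self._pending[call_id] = on_result
        self._send({"type": "start", "id": call_id,
                    "method": method, "params": params})

    def on_message(self, msg):
        if msg["type"] == "result":
            # pop so a duplicate result can't fire the callback twice
            self._pending.pop(msg["id"])(msg["value"])

sent = []
results = []
chan = RpcChannel(sent.append)
chan.start("add", {"a": 1, "b": 2}, results.append)
# Pretend the server replied:
chan.on_message({"type": "result", "id": sent[0]["id"], "value": 3})
assert results == [3]
```

This id-correlation is essentially what gRPC's HTTP/2 stream multiplexing gives you for free, which is a fair argument for not going this far down to the metal on a team.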
What's the main use-case for gRPC? I had the impression RPC was seen as a mistake. Sure, gRPC also uses a binary protocol, but that doesn't seem like a USP of gRPC. Why didn't they go with a non-RPC binary format? Serious question! It sounds a bit counterintuitive to me at first glance.
1. Performance
2. It's hard to make changes that are backwards incompatible via protobuf (reduces a significant source of bugs)
3. Great, standardized observability for every service. Small services don't really need too many custom metrics, since we log a LOT of metrics at the RPC layer
4. Standardization at the RPC layer lets us build useful generic infrastructure - like a load testing framework (where users only need to specify the RPC service, method, and parameters like concurrency, RPS).
It's a generic RPC protocol based on a well-enough-typed, battle-tested serialization format (protobuf). You'd use it where you'd use REST/JSON-RPC/...
Compared to plain JSON/REST RPC, it has all the advantages of protobuf over JSON (ie. strong typing, client/server code generation, API evolution, etc), but also provides some niceties at the transport layer: bidirectional streaming, high quality TCP connection multiplexing, ...
For hypermedia/hypertext Fielding made a solid argument for preferring REST over RPC (mostly because of caching). I still recommend reading his very approachable thesis - these days not so much for the web/REST architecture, but for the other ones, which include modern SPAs (they're not great as hypermedia apps, but fine as networked applications):
https://www.ics.uci.edu/~fielding/pubs/dissertation/top.htm
Apart from caching (and cache invalidation), it's accepted that making the network invisible is a bad idea - it will lead to issues. So remote procedure calls aren't inherently bad, but blurring the distinction too much between local procedure calls and remote ones isn't a great idea. From hung NFS mounts to unpredictable application performance, it is unpleasant.
This Netflix talk on their not-graphql framework gives a very nice summary of when and how you might prefer RPC to REST:
https://youtu.be/hOE6nVVr14c
In many ways comparing REST and gRPC is apples-to-oranges. You can design a gRPC API to work according to REST principles, and it is actually generally encouraged to do so.
And more to the point, the vast majority of "REST APIs" I've experienced in the wild are just RPC-style APIs that use JSON.
RPC is apparently en vogue again. Everything new is old.
Seriously binary RPC has been around for ages.
It’s a pretty decent implementation of the pattern though. Efficient binary protocol (unlike SOAP), built-in security, and none of the complexity of object request brokers. Although you might actually want that, and then you’ll likely end up with something complex like Istio.
Aren't REST and webhooks just RPC protocols too?
gRPC is not necessarily binary. It is often conflated with protobuf but it is in fact payload format agnostic. You can run it with JSON payloads if you want.
When does AWS roll out QUIC support in ALBs?
It will probably be a while. We've been evaluating QUIC and the ecosystem just isn't quite ready. We opted to release UDP support instead, so apps that want QUIC can do it, but we can avoid adding much extra plumbing in front of the simple HTTP apps.
Given how much AWS is investing in Rust, they'll probably ship first-class support for QUIC when Hyper does (same as us!): https://github.com/hyperium/hyper/issues/2078
I'm in the process of advocating gRPC to my company, which is starting to lay down the foundations to scale up. This presentation comes in handy.
Also the blog posts on grpc.io are interesting, but I find them harder to discover whilst reading the documentation. But here they are: https://grpc.io/blog/
Grasping the concept of a context/deadlines is quite helpful:
https://grpc.io/blog/deadlines/
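The deadline idea in a nutshell: the entry point sets one overall budget, and each downstream hop computes what is left rather than picking its own independent timeout. A toy Python sketch of just that arithmetic (no gRPC involved):

```python
import time

# Toy deadline propagation: the entry point picks an absolute deadline,
# and every downstream hop derives its remaining budget from it instead
# of choosing an independent timeout. (Pure illustration, no gRPC.)

def remaining(deadline):
    """Seconds left before the absolute deadline (never negative)."""
    return max(0.0, deadline - time.monotonic())

def call_downstream(deadline, needed):
    """Fail fast if the budget can't cover the work this hop needs."""
    if remaining(deadline) < needed:
        raise TimeoutError("deadline exceeded before starting work")
    return "ok"

deadline = time.monotonic() + 0.5          # entry point: 500 ms total budget
assert call_downstream(deadline, needed=0.1) == "ok"

try:
    call_downstream(deadline, needed=2.0)  # a hop needing 2 s must fail fast
except TimeoutError:
    pass
else:
    raise AssertionError("expected TimeoutError")
```

gRPC bakes this into the protocol: the deadline travels with the request, so a slow upstream hop shrinks the budget every downstream hop sees.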
You could also find related information in the Google SRE Handbook (Service Level Objectives): https://landing.google.com/sre/sre-book/chapters/service-lev...
If you are familiar with Go, the article about "Context" might also be helpful: https://blog.golang.org/context
But in any case, gRPC is language agnostic and has nothing to do with Go.
To get an idea how to create an api-repository with protobuf definitions to be shared by multiple services/clients, one can look at: https://github.com/googleapis/googleapis