Finally! Up until now, when people asked how they were supposed to proxy gRPC traffic, we could only recommend Envoy. Pretty much no one wants to hear that they have to change their stack to use a new technology. Since a large part of the world is already on nginx, this was a real barrier to adoption.
Please! There is a working TypeScript client implementation [0] of gRPC-Web [1], which relies on a custom proxy for converting gRPC to gRPC-Web [2]. Would be nice to bring that proxy functionality into Nginx.
If gRPC had been designed slightly differently, it could have had good proxy support AND browser support right from the start.
E.g. it's already based on top of HTTP(/2), and uses the normal URL path for distinguishing methods, which would actually be a good prerequisite to make it work everywhere. But then OTOH it uses barely supported HTTP features like trailers, which require very special HTTP libraries and are not universally supported. If the status codes had been implemented as just another chunk of the HTTP body, and if some other small changes had been made, we could have had gRPC from browsers a long time ago. I guess that's what gRPC-Web now tries to fix, but I haven't dug into it in detail.
For the record, the reason gRPC uses trailers is because it uses HTTP/2, not the other way around. It was expected that, since the whole transport was completely new, adopters of HTTP/2 would add trailer support. As it turns out, they mostly didn't. In particular, Firefox and Chrome did not expose trailers, despite their being part of the new Fetch API.
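For context, here's roughly what a successful gRPC response looks like on the wire (a sketch following the gRPC over-HTTP/2 protocol doc; the RPC status travels in the trailing HEADERS frame, which is exactly the part browsers never exposed):

```text
HEADERS  :status: 200
         content-type: application/grpc+proto
DATA     <length-prefixed protobuf response message>
HEADERS  grpc-status: 0        (trailers; 0 = OK)
         grpc-message:
```

A plain HTTP client that ignores the trailing HEADERS frame has no way to learn whether the call actually succeeded.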
If you want to have a set of globally defined types and/or language-independent types to share between your various programs or services, gRPC and Protobufs are a good option.
Also, anywhere that you might use RPC you could use gRPC. It has a compact wire format and is pretty user-friendly as far as designing your RPC req/rep types.
I used gRPC for numerous hobby projects during my undergrad to glue together binaries running in different languages (e.g. a simulation server running in C++ and a scripting client in Python). By passing around a shared data structure (Protobufs), one does not need to waste time writing serialization/de-serialization adapters. It is also useful for gluing together microservices.
FB's Thrift also solves the same problem, and is an alternative to gRPC.
Latency-sensitive / chatty microservices can benefit greatly. Some of this comes from the nature of HTTP/2, but it's extended by the protocol-buffer packaging of messages and other client smarts. Inter-service comms is where this popped onto my radar recently.
You get type safety in your API, you get autogenerated client code, and you get http2 out of the box.
Personally I find the autogenerated client code to be the biggest upside. Anyone who wants to use your API, in any language supported by the RPC, can start doing it with very little work. Gone are the days of maintaining officially-supported client libraries.
No disrespect intended, but I find this comment pretty funny. SOAP/XML has been exactly this for 20 years. It definitely has some major warts, but gRPC isn’t doing anything new.
Does this mean anything for HTTP/2 generally, specifically support for HTTP/2 upstreams? I would imagine that was a necessity to support gRPC, so will that come to generic HTTP/2 as well?
Not sure if you're looking for an explanation of RPC in general or just specifically how gRPC does it, but I guess I'll kind of cover both.
You define a set of method calls, using a custom language. Each method call has a single message as its request and another message as its response. (You can actually get fancier than this, but you usually don't.)
In gRPC, the messages are usually protocol buffers (though other formats, such as JSON, are supported). The method calls are organized into groups called services.
You stick these definitions in a file, then run a tool that takes these definitions and generates code in your desired language (Java, Python, etc. -- gRPC supports many languages). This code allows you to build objects that will get turned into protocol buffers wire format and sent across from client to server and back.
So for example, if you define a method Foo that takes a FooRequest and returns a FooResponse, you would put this in a definition file and run a tool that generates some code. For the sake of this example, we'll say you're using Java for everything, so you tell the tool to generate Java code. This generated Java code would include code to create a FooRequest object and set values in it (strings, ints, etc.). It would also include a Java method you can call that takes your FooRequest, sends it to the server, and gives you back a FooResponse after the server responds. On the server side, you also get generated Java code to help you respond to this request. Your Java code on the server side will receive a FooRequest, it can use generated code to read the fields out of it (those same strings, ints, etc.), and then it can build a response in the same way that the client built the request.
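Concretely, the definition file for that example could look something like this (a sketch in protobuf's IDL; the field names are made up for illustration):

```proto
syntax = "proto3";

// The request/response pair from the example above.
message FooRequest {
  string name = 1;
  int32 count = 2;
}

message FooResponse {
  string greeting = 1;
}

// A service groups related methods together.
service Bar {
  // Unary call: one FooRequest in, one FooResponse out.
  rpc Foo(FooRequest) returns (FooResponse);
}
```

Feeding this to protoc with the gRPC Java plugin produces the builder classes and client/server stubs described above.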
On the client, there is obviously some work involved in opening connections to the server, converting the FooRequest into wire-format data (and vice versa for FooResponse), but that is done for you, and you just need to tell it the server's address. On the server, there is work involved in listening for connections from clients, figuring out which RPC method is being called and routing it to the right Java method, converting the wire-format data into objects (and vice versa), but all that is done for you, and you just need to tell it what port to listen on.
gRPC itself uses HTTP/2 and makes POST calls when your client calls a method. The methods and services you define are mapped to URLs. So if you define a Bar service with a Foo method inside, it will be turned into /Bar/Foo when the HTTP call is made.
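Those POST bodies carry each serialized message in a small length-prefixed framing. A minimal Python sketch of that framing, per the gRPC over-HTTP/2 protocol doc (the sample payload bytes are a made-up stand-in for a serialized FooRequest):

```python
import struct

def frame(payload: bytes, compressed: bool = False) -> bytes:
    # gRPC length-prefixed message framing:
    # 1-byte compressed flag + 4-byte big-endian length + payload
    return struct.pack(">BI", 1 if compressed else 0, len(payload)) + payload

def unframe(data: bytes) -> bytes:
    # Parse the 5-byte prefix and return the first message's payload
    flag, length = struct.unpack(">BI", data[:5])
    return data[5:5 + length]

# Round-trip a (hypothetical) serialized FooRequest payload
payload = b"\x0a\x03abc"
assert unframe(frame(payload)) == payload
```

The generated client and server code handle this framing for you; you never touch it directly.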
short: a protobuf-based (so relatively language-agnostic) RPC mechanism that communicates over HTTP/2
slightly longer: one writes a protobuf file that gets compiled into language-specific server and client code. All you have to do is implement the server functions or call the generated client functions to make RPC calls.
This is great! TL;DR: instead of building a JSON or GraphQL API, you can now easily expose your gRPC service to the outside world!
We use gRPC in my company. We're happy but some things were not easy or straightforward to implement. With this update nginx makes load balancing and authentication easier to implement.
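For reference, the new module makes proxying look like ordinary nginx config (a sketch based on the ngx_http_grpc_module docs; the upstream name and addresses are made up):

```nginx
upstream grpc_backends {
    # Round-robin load balancing across gRPC servers
    server 10.0.0.1:50051;
    server 10.0.0.2:50051;
}

server {
    listen 80 http2;    # gRPC requires HTTP/2

    location / {
        grpc_pass grpc://grpc_backends;
    }
}
```

Authentication can then be layered on in the same location block with nginx's usual auth directives, before the request is passed upstream.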
I was actually just about to say that. There seems to be a fair bit of overlap between GraphQL and gRPC from the codegen and schema POV. GraphQL is intended more as a data query language, and gRPC more as generic function calls.
But they’re both essentially solving the dev problem of having ‘typed contracts’, right?
Very interesting evolution! SOAP to REST to GraphQL, more popular on the ‘frontend’ side, and gRPC on the ‘backend’ side.
HAProxy has had HTTP/2 support beginning with 1.6 (May 2015); however, I have not yet looked into using it. I do wonder if it possesses the same features in terms of inspecting the method names on a per-request basis.
This is Google once again using the thin end of the wedge to push its own proprietary protocols into web standards.
My idea of a good time is not a future where the internet is built on Google technologies dressed up as "open technologies" that, uh-huh, just happen to be exactly the same as the infrastructure protocols that span the internal Googleverse.
Besides that, protobuf and its ilk aren't even good or modern.
People who say "yay, look at it growing" are very naive, imho.
The difference is between open (which gRPC is) and proprietary. Its origin doesn't really matter. Lots of great open tech has come from Google, Microsoft, Apple, Amazon, Facebook, Netflix, Github, etc. Almost all the big projects started at a big company that needed to get something done and had the resources to create something new.
I'd rather the industry pick something and actually standardize instead of reinventing the same thing repeatedly just for some philosophical reasons.
There might be some problems with it, and it is not perfect, but it is already open and not bound to Google. We used it in internal projects, and multiple very large projects also use it freely. I really like the strong typing, the bidirectional streams, and the code generation for various languages. Our main product was in Go, but a part of it had to be in Java, and the integration was very easy because we used gRPC. Not that this cannot be achieved with other tools and frameworks, but gRPC is already popular and performs well enough for most people.
"Not cool" is telling the long-term maintainers of software what "has no place in their software" and not to fulfill user/customer requests because that'd be helping a company you don't like.
Google's influence in many parts is a problem, but people using an internal protocol they've made up is basically irrelevant IMHO, unless you have a really good argument why it is a problem.
[+] [-] anameaname|8 years ago|reply
Next up, browser support?
[+] [-] pacala|8 years ago|reply
[0] https://github.com/improbable-eng/grpc-web/tree/master/ts
[1] https://github.com/grpc/grpc/blob/master/doc/PROTOCOL-WEB.md
[2] https://github.com/improbable-eng/grpc-web/tree/master/go/gr...
[+] [-] rubiquity|8 years ago|reply
[+] [-] jacques_chester|8 years ago|reply
[+] [-] Matthias247|8 years ago|reply
[+] [-] anameaname|8 years ago|reply
[+] [-] mratzloff|8 years ago|reply
[+] [-] x25519|8 years ago|reply
Initial commit: https://hg.nginx.org/nginx/rev/2713b2dbf5bb
Additional features: https://hg.nginx.org/nginx/rev/c693daca57f7 and https://hg.nginx.org/nginx/rev/c2a0a838c40f
[+] [-] nginxgrpc|8 years ago|reply
Can any of you tell if it includes unit tests? I didn't see any.
[+] [-] andrewstuart|8 years ago|reply
[+] [-] dbmikus|8 years ago|reply
[+] [-] ericjang|8 years ago|reply
[+] [-] Thaxll|8 years ago|reply
[+] [-] awinder|8 years ago|reply
[+] [-] _skel|8 years ago|reply
[+] [-] theshrike79|8 years ago|reply
[+] [-] toprerules|8 years ago|reply
[+] [-] adamkl|8 years ago|reply
[+] [-] perfmode|8 years ago|reply
[+] [-] whyrusleeping|8 years ago|reply
[+] [-] awinder|8 years ago|reply
[+] [-] chuckdries|8 years ago|reply
[+] [-] adrianmonk|8 years ago|reply
[+] [-] compsciphd|8 years ago|reply
[+] [-] grizzles|8 years ago|reply
It's designed to be very simple to configure. It doesn't support streams yet, but it should soon.
[+] [-] brunosutic|8 years ago|reply
[+] [-] tango12|8 years ago|reply
[+] [-] merb|8 years ago|reply
[+] [-] sigmonsays|8 years ago|reply
[+] [-] wolfspider|8 years ago|reply
[+] [-] throwawaysunday|8 years ago|reply
There is no place for gRPC in NGINX.
[+] [-] masklinn|8 years ago|reply
Meh. There's place in NGINX for Adobe HDS[0], FLV streaming[1], JWT[2], memcached[3], Flash MP4 pseudo-streaming[4] and XSLT[5].
Hell, spdy draft 3.1 is still supported[6]…
[0] http://nginx.org/en/docs/http/ngx_http_f4f_module.html
[1] http://nginx.org/en/docs/http/ngx_http_flv_module.html
[2] http://nginx.org/en/docs/http/ngx_http_auth_jwt_module.html
[3] http://nginx.org/en/docs/http/ngx_http_memcached_module.html
[4] http://nginx.org/en/docs/http/ngx_http_mp4_module.html
[5] http://nginx.org/en/docs/http/ngx_http_xslt_module.html
[6] http://nginx.org/en/docs/http/ngx_http_spdy_module.html
[+] [-] dankohn1|8 years ago|reply
It's now being used by tons of different companies, including Google competitors like Microsoft Azure.
Disclosure: I'm the executive director of CNCF.
[+] [-] manigandham|8 years ago|reply
[+] [-] CSDude|8 years ago|reply
[+] [-] detaro|8 years ago|reply
[+] [-] maltalex|8 years ago|reply
What would you consider "good" or "modern"? JSON?
[+] [-] colordrops|8 years ago|reply
[+] [-] pkaye|8 years ago|reply