
Announcing gRPC Support in Nginx

384 points| tex0 | 8 years ago |nginx.com | reply

80 comments

[+] anameaname|8 years ago|reply
Finally! Up until now, when people asked how they were supposed to proxy gRPC traffic, we could only recommend Envoy. Pretty much no one wants to hear that they have to change their stack to use new technology. Since a large part of the world is already on nginx, this was a real barrier for adoption.

Next up, browser support?

[+] rubiquity|8 years ago|reply
I remember seeing that Nginx has TCP proxying as well. Couldn’t that be an option?
[+] jacques_chester|8 years ago|reply
My hunch is that the impetus was largely because of this kind of conversation.
[+] Matthias247|8 years ago|reply
If gRPC had been designed slightly differently, it could have had good proxy support AND browser support right from the start.

E.g. it's already based on top of HTTP(/2), and uses normal paths for distinguishing methods, which would actually be a good prerequisite to make it work everywhere. But then OTOH it uses barely supported HTTP features like trailers, which require very special HTTP libraries and are not universally supported. If the status codes there had been implemented as just another chunk of the HTTP body, and if some other small changes had been made, we could have had gRPC from browsers a long time ago. I guess that's what grpc-web now tries to fix, but I haven't dug into that in detail.
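To illustrate the trailers point: this is roughly what a gRPC response looks like on the wire (a simplified sketch, not an exact frame dump) - the status arrives after the body, as HTTP/2 trailers, which most browser HTTP stacks never exposed:

```
HTTP/2 200
content-type: application/grpc+proto

<length-prefixed protobuf response message(s)>

grpc-status: 0        <- sent as trailers, AFTER the body
grpc-message: OK
```

A browser fetch() could read the headers and the body, but not the trailers, which is why plain gRPC couldn't be called from browsers.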

[+] anameaname|8 years ago|reply
For the record, the reason gRPC uses trailers is because it uses http/2, not the other way around. It was expected that, since the whole transport was completely new, adopters of http/2 would add trailer support. As it turns out, they mostly didn't. In particular, Firefox and Chrome did not expose trailers, even though they are part of the new Fetch API.
[+] mratzloff|8 years ago|reply
I'm using grpc-web in a service that's going live soon. It works great.
[+] andrewstuart|8 years ago|reply
OK so what are good use cases for gRPC? What problem does it solve, and in what contexts should I be reaching for gRPC?
[+] dbmikus|8 years ago|reply
If you want to have a set of globally defined types and/or language-independent types to share between your various programs or services, gRPC and Protobufs are a good option.

Also, anywhere that you might use RPC you could use gRPC. It has a compact wire format and is pretty user-friendly as far as designing your RPC req/rep types.

[+] ericjang|8 years ago|reply
I used gRPC for numerous hobby projects during my undergrad to glue together binaries running in different languages (e.g. a simulation server running in C++ and a scripting client in Python). By passing around a shared data structure (Protobufs), one does not need to waste time writing serialization/de-serialization adapters. It is also useful for gluing together microservices.

FB's Thrift also solves the same problem, and is an alternative to gRPC.

[+] Thaxll|8 years ago|reply
Everything where you would use REST ( http / json ) but where the client is not a browser.
[+] awinder|8 years ago|reply
Latency-sensitive / chatty microservices can benefit greatly. Some of this is by nature of http2 but it’s extended by protocol buffer packaging of messages and other client smarts. Inter-service comms is where this popped onto my radar recently.
[+] _skel|8 years ago|reply
You get type safety in your API, you get autogenerated client code, and you get http2 out of the box.

Personally I find the autogenerated client code to be the biggest upside. Anyone who wants to use your API, in any language supported by the RPC, can start doing it with very little work. Gone are the days of maintaining officially-supported client libraries.

[+] theshrike79|8 years ago|reply
Low latency and/or low bandwidth data transfer in M2M communication, especially when the client and servers are done in different languages.
[+] toprerules|8 years ago|reply
I love seeing gRPC grow. An RPC system with a schema and code generation is a must for internal services. gRPC has worked really well for me.
[+] adamkl|8 years ago|reply
No disrespect intended, but I find this comment pretty funny. SOAP/XML has been exactly this for 20 years. It definitely has some major warts, but gRPC isn’t doing anything new.
[+] perfmode|8 years ago|reply
Seems they punted on graceful handling of bidirectional streams.
[+] whyrusleeping|8 years ago|reply
Last I checked, gRPC could only technically support bidirectional streams. None of the libraries I looked at actually implemented them.
[+] awinder|8 years ago|reply
Does this mean anything for http2, specifically anything for support for http2 upstreams? I would imagine that was a necessity to support gRPC, so will that come to generic http2 as well?
[+] chuckdries|8 years ago|reply
Anyone have a good ELI5 of gRPC? It says it's a fast RPC implementation, but all the explanations of RPC seem very in-the-weeds.
[+] adrianmonk|8 years ago|reply
Not sure if you're looking for an explanation of RPC in general or just specifically how gRPC does it, but I guess I'll kind of cover both.

You define a set of method calls, using a custom language. Each method call has a single message as its request and another message as its response. (You can actually get fancier than this, but you usually don't.)

In gRPC, the messages are usually protocol buffers (though other formats, like JSON, are supported). The method calls are organized into groups called services.

You stick these definitions in a file, then run a tool that takes these definitions and generates code in your desired language (Java, Python, etc. -- gRPC supports many languages). This code allows you to build objects that will get turned into protocol buffers wire format and sent across from client to server and back.

So for example, if you define a method Foo that takes a FooRequest and returns a FooResponse, you would put this in a definition file and run a tool that generates some code. For the sake of this example, we'll say you're using Java for everything, so you tell the tool to generate Java code. This generated Java code would include code to create a FooRequest object and set values in it (strings, ints, etc.). It would also include a Java method you can call that takes your FooRequest, sends it to the server, and gives you back a FooResponse after the server responds. On the server side, you also get generated Java code to help you respond to this request. Your Java code on the server side will receive a FooRequest, it can use generated Java code to read the fields out of it (those same strings, ints, etc.), and then it can build a response in the same way that the client built the request.

On the client, there is obviously some work involved in opening connections to the server, converting the FooRequest into wire-format data (and vice versa for FooResponse), but that is done for you, and you just need to tell it the server's address. On the server, there is work involved in listening for connections from clients, figuring out which RPC method is being called and routing it to the right Java method, converting the wire-format data into objects (and vice versa), but all that is done for you, and you just need to tell it what port to listen on.

gRPC itself uses HTTP/2 and makes POST calls when your client calls a method. The methods and services you define are mapped to URLs. So if you define a Bar service with a Foo method inside, it will be turned into /Bar/Foo when the HTTP call is made.
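As a concrete sketch of the definition file described above (the Bar/Foo/FooRequest names are just this example's placeholders):

```protobuf
syntax = "proto3";

// The request and response messages for the Foo method.
message FooRequest {
  string query = 1;
}

message FooResponse {
  string result = 1;
}

// Methods are grouped into services; over HTTP/2 this method
// is invoked as POST /Bar/Foo.
service Bar {
  rpc Foo (FooRequest) returns (FooResponse);
}
```

Running protoc with a gRPC plugin for your language turns this file into the generated client and server code described above.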

[+] compsciphd|8 years ago|reply
short: a protobuf-based (so relatively language-agnostic) RPC mechanism that communicates over http2

slightly longer: one writes a protobuf that gets compiled into language-specific server and client code. All you have to do is implement the server functions, or call the generated client functions to make rpc calls.

[+] grizzles|8 years ago|reply
This is great; I also recently published an alternative designed for browser clients: https://github.com/ericbets/danby

It's designed to be very simple to configure. It doesn't support streams yet, but it should soon.

[+] brunosutic|8 years ago|reply
This is great! TL;DR: instead of building a JSON or GraphQL API, you can now easily expose your gRPC service to the outside world!

We use gRPC at my company. We're happy, but some things were not easy or straightforward to implement. With this update, nginx makes load balancing and authentication easier to implement.
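For reference, a minimal sketch of what the new nginx support looks like in config (the upstream addresses and certificate paths here are made up for illustration):

```nginx
# Requires nginx 1.13.10+ with the http_v2 module.
upstream grpc_backend {
    server 10.0.0.1:50051;
    server 10.0.0.2:50051;
}

server {
    listen 443 ssl http2;
    ssl_certificate     /etc/nginx/cert.pem;
    ssl_certificate_key /etc/nginx/key.pem;

    location / {
        grpc_pass grpc://grpc_backend;
    }
}
```

Load balancing across the upstream servers then works much like ordinary HTTP proxying.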

[+] tango12|8 years ago|reply
I was actually just about to say that. There seems to be a fair bit of overlap between graphql and grpc from the codegen and schema POV. Graphql is intended more as a data query language and grpc as more generic function calls.

But they’re both essentially solving the dev problem of having ‘typed contracts’ right?

Very interesting evolution! SOAP to REST to graphql more popular on the ‘frontend’ side and grpc on the ‘backend’ side.

[+] merb|8 years ago|reply
from my understanding this only works over h2 or h2c
[+] sigmonsays|8 years ago|reply
haproxy has had http/2 support beginning with 1.6 (May 2015), however I have not yet looked into using it. I do wonder if it possesses the same features in terms of inspecting the method names on a per-request basis.
[+] wolfspider|8 years ago|reply
Congrats to the Nginx team and contributors! Some of us have been waiting anxiously for this release - this will be great!
[+] throwawaysunday|8 years ago|reply
This is not cool

There is no place for gRPC in NGINX.

This is Google trying to thin-end-of-the-wedge their own proprietary protocols into web standards yet again.

My idea of a good time is not a future where the internet is built using Google technologies dressed up as "open technologies"... that, uh-huh, just happen to be the exact same as the infrastructure protocols that span the internal Googleverse.

Besides that, protobuf and its ilk aren't even good or modern.

People who say "yay, look at it growing" are very naive imho.

[+] manigandham|8 years ago|reply
What are better options?

The difference is between open (which gRPC is) and proprietary. The origin doesn't really matter. Lots of great open tech has come from Google, Microsoft, Apple, Amazon, Facebook, Netflix, Github, etc. Almost all the big projects started at a big company that needed to get something done and had the resources to create something new.

I'd rather the industry pick something and actually standardize instead of reinventing the same thing repeatedly just for some philosophical reasons.

[+] CSDude|8 years ago|reply
There might be some problems with it and it is not perfect, but it is already open and not bound to Google; we used it in internal projects, and multiple very large projects also use it freely. I really like the strong typing, the ability to do bi-directional streams, and the code generation for various languages. Our main product was in Go, but a part had to be in Java, and the integration was very easy because we used gRPC. Not that this cannot be achieved with other tools and frameworks, but gRPC is already popular and performs well enough for most people.
[+] detaro|8 years ago|reply
"Not cool" is telling the long-term maintainers of software what "has no place in their software" and not to fulfill user/customer requests because that'd be helping a company you don't like.

Google's influence in many parts is a problem, but people using an internal protocol they've made up is basically irrelevant IMHO, unless you have a really good argument why it is a problem.

[+] maltalex|8 years ago|reply
> Besides that, protobuf and its ilk aren't even good or modern.

What would you consider "good" or "modern"? JSON?

[+] colordrops|8 years ago|reply
Just out of curiosity, what's a modern alternative to protobufs?
[+] pkaye|8 years ago|reply
Is Google behind this integration effort?