
gRPC-Go Engineering Practices

140 points | tgma | 8 years ago | grpc.io

82 comments

[+] rb808|8 years ago|reply
It's a good time to ask: is gRPC any good? I'd love to standardize on a stable middleware layer that handles multiple versions of clients and servers well. REST with JSON really seems to work great for most things already.

What is the advantage of gRPC - just more efficient?

[+] kajecounterhack|8 years ago|reply
Versioning your API is one huge benefit, and it's better done with proto/RPC. If you change your JSON schema, have fun propagating that change to all clients without fear, or hope you've built special infra to do that.

Also "just more efficient" is a funny way to characterize the performance difference between just data bytes vs data + structure bytes (read: the gap is large). You gain in transmission and you gain during deserialization / parsing.

Here is an example.

{"My key":"my value"} has n=21 characters. When you parse, you must scan all 21 characters, O(n), every time, just to read the thing.

If you instead store this in fixed size bytes, where you have some fixed # of bytes that tell you "my value starts at address 0x43", then you can skip to just the values you care about. You don't need brackets or quotes. And you can use other nifty tricks to compress the binary representation further for savings on the wire.
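To make that concrete, here's a toy sketch in Go (hand-rolled length-prefixed fields, not the actual protobuf wire format) showing how a reader can jump straight to the value instead of scanning every byte:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// encode writes key and value as length-prefixed fields:
// [2-byte key len][key bytes][2-byte value len][value bytes].
func encode(key, value string) []byte {
	buf := make([]byte, 0, 4+len(key)+len(value))
	buf = binary.BigEndian.AppendUint16(buf, uint16(len(key)))
	buf = append(buf, key...)
	buf = binary.BigEndian.AppendUint16(buf, uint16(len(value)))
	buf = append(buf, value...)
	return buf
}

// readValue jumps straight to the value: read the key length,
// skip that many bytes, read the value length, slice. No
// character-by-character scan for quotes, colons, or braces.
func readValue(buf []byte) string {
	klen := int(binary.BigEndian.Uint16(buf))
	off := 2 + klen
	vlen := int(binary.BigEndian.Uint16(buf[off:]))
	return string(buf[off+2 : off+2+vlen])
}

func main() {
	doc := `{"My key":"my value"}`
	bin := encode("My key", "my value")
	fmt.Println(len(doc), len(bin)) // 21 18: no quotes, braces, or colons on the wire
	fmt.Println(readValue(bin))     // my value
}
```

Real protobuf is cleverer still (varint tags, field numbers instead of key strings), but the skip-by-offset idea is the same.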

[+] willvarfar|8 years ago|reply
Afraid I'm going to be contrary and old-fashioned and say I prefer JSON.

It's never been problematic adding or extending JSON endpoints, and it's never been a problem using basic gzip compression on the fly either.

And JSON endpoints are a damn sight easier to debug and wireshark and all the rest.

I've spent a lot of time writing fast JSON serialization for various languages, including Java; it's staggering how inefficient most libraries are. But that's not really JSON's fault.

[+] cube2222|8 years ago|reply
If you're doing microservices, it gets kind of tedious to write all your client libraries by hand; gRPC generates your client code, which saves time.
[+] Thaxll|8 years ago|reply
Also streaming, which you can't do with regular REST, so think push notifications and the like.

One of the biggest advantages imo is the contract between the client and the server: both are always in sync about what to send and receive.

I've seen things break many times because x, y, or z added a field or changed a type that the server or client couldn't understand.

[+] mattb314|8 years ago|reply
re: efficiency, I think this blog post does a decent job of explaining: https://auth0.com/blog/beating-json-performance-with-protobu.... Basically, it doesn't make a huge difference if you're communicating with a JavaScript endpoint, but Java to Java (or probably between other non-JS backends) you can save a lot of time on serialization/deserialization. It's worth noting that the post uses fairly large blobs (50k people and addresses), and you probably won't see as big a difference on smaller requests.
[+] dhavalshreyas|8 years ago|reply
Biggest benefits I've seen so far: codegen, streaming connections, and API versioning.
[+] dmayle|8 years ago|reply
JSON is a serialization format. gRPC is both a serialization format and a DDL (data definition language).

That means that you are storing your schema, which also happens to contain interoperability features.

...and the serialization format is more efficient.

[+] zimbatm|8 years ago|reply
Most of the comments here focus on Protobuf, but the other half of gRPC is HTTP/2, and HTTP/2 is very complex.

Good luck implementing a half-decent HTTP/2 client or server library if your language doesn't support it already. This is easily a six-month job.

The other issue with HTTP/2 is that most client libraries require TLS to work. Development environments become harder to set up. It becomes harder to sniff the traffic during debugging. And in client/server scenarios where both are on the same machine, it also creates unnecessary overhead.

[+] Inflatablewoman|8 years ago|reply
My only regret is that when we started our current project, I did not commit 100% to gRPC, so now we have a mix of services. If I had gone all in, it would be easier to integrate upcoming things like Conduit [1], and I would not have to generate Swagger files but could just ship the proto files.

[1] https://conduit.io/

[+] macrael|8 years ago|reply
It would be an ideal standard if it supported browsers.
[+] bruth|8 years ago|reply
Slightly off-topic, but related: I have read that a common practice for managing proto files (or any schema definitions, really) is to put them in a separate repo/package to share. It seems pretty straightforward in my head and provides several advantages. Still, I'd ask: are there any trade-offs when doing this in practice?
[+] doh|8 years ago|reply
It has similar challenges to a monorepo.

The first challenge is that you have to keep some kind of reference to which protos you are using within your project. What Google (and some others) do is a) put all the proto files in a separate repo [0] and then b) generate them for each language separately (Python [1], ...). This way you can use whichever proto file you need within your project; however, you have to load more libraries than you need. To be honest, it only makes a slight difference when deploying, so it's not too bad.

The second challenge is that you have to regenerate the result files every time you make a change. If you have a lot of proto files, it may take some time to generate them, and there are very few tools available to help you. Google open-sourced Artman [2], although it's more focused on APIs than on managing shared protos.

The massive advantage is that proto files are self-explanatory and, if you put enough information in them, can function as direct documentation of your API's interface: you don't need to fish the requirements out of the project or the documentation, you can just read the proto file itself. But this does depend on the developers keeping them as consistent as possible, which is not always the case [3].

[0] https://github.com/googleapis/googleapis

[1] https://pypi.python.org/pypi/googleapis-common-protos

[2] https://github.com/googleapis/artman

[3] https://news.ycombinator.com/item?id=16166153

[+] jeffrand|8 years ago|reply
Great to hear. I'm pretty bullish on the framework and have been using it happily in Go for a while.
[+] willvarfar|8 years ago|reply
There has been a flurry of gRPC posts on HN recently - it must be the new XML/SOAP/REST/NoSQL fad!

NoSQL is an interesting parallel: Google published MapReduce and Bigtable, Amazon published some influential papers, and suddenly everyone was using NoSQL in order to be "web scale". Then it turned out that Google themselves were doing SQL at web scale, with Spanner and all that.

There's a risk that gRPC is the same? In chat yesterday, ex-Googlers said that Google was increasingly moving over to FlatBuffers...

Personally I have an aversion to tools with generators. Harks back to the damage CORBA did me, I guess... I also have a preference for plaintext, e.g. JSON - so much easier to debug.

Oh well. Guess we're in the fashion business ... ;)

[+] j_s|8 years ago|reply
Is this gRPC the same thing as the golang net/rpc referred to here: https://news.ycombinator.com/item?id=16170116? I don't think so, but I've never used either one.

>seniorsassycat: I don't understand why AWS released Go support instead of binary support and I don't understand why they chose to rely on go's net/rpc [...] which encodes objects using Go's special [gobs] binary format

[+] cube2222|8 years ago|reply
net/rpc is an RPC implementation in the Go standard library, which uses gob for serialization.

gRPC is a protocol and a set of libraries for cross-language RPC based on protobufs. It also does a lot of codegen for you, like generating clients.

[+] dguaraglia|8 years ago|reply
No, this is an RPC and streaming framework built on top of Protobuf and HTTP/2. It's essentially an open-source version of libraries that Google uses pretty much everywhere internally.
[+] mehrdada|8 years ago|reply
No, I believe gRPC is a different thing. I don't know what net/rpc is.
[+] virmundi|8 years ago|reply
Has anyone found a good resource on using gRPC directly from a JS client? I've looked at using gRPC. My current challenge is that I want to support a website/web gateway on one side and a mobile gateway on the other. If I use Swift or Java on the mobile side, it's easy. If I use the Ionic Framework, I'm in the same spot as with the web gateway; probably better off with HTTP + RPC.
[+] quietbritishjim|8 years ago|reply
It is possible to use gRPC directly from JavaScript using the gRPC-Web [1] project by Improbable. More accurately, it uses TypeScript, which is generated from the proto file in a similar way to other languages. You still need a proxy to transform requests from HTTP/1.1 to HTTP/2.0.

I've actually only used the gRPC JSON gateway mentioned in the other replies so I'm not sure how it compares, but it looks interesting.

[1] https://github.com/improbable-eng/grpc-web

[+] qsymmachus|8 years ago|reply
Does anyone have experience with both gRPC and Thrift? I'd be curious to know how they compare.
[+] kajecounterhack|8 years ago|reply
Thrift used to support more languages (this has changed). gRPC was more performant for a while (take that with a grain of salt; it's word of mouth) -- it's unclear whether that has changed, or whether the difference was ever significant except at very high scale.

I think they're pretty similar and you can't lose either way. Facebook's support of Thrift and Google's of gRPC make both decent options.

One thing I will say about gRPC is that it plays nicely with Google's build system (Bazel), and some Google APIs now have first-class gRPC support. If you choose Thrift in your stack, you'll have to call those APIs using JSON, or support gRPC anyway if you want to use them, so gRPC might be an attractive choice. Furthermore, gRPC's Go interop is excellent if you happen to be a fan of golang.

[+] Yeroc|8 years ago|reply
I've used Thrift on a project in the past, and one feature gRPC has that Thrift doesn't is streaming semantics. That would actually have been very useful for the project I used Thrift on. If I were implementing it now, I'd definitely use gRPC.