
Protocol Buffers v3.0.0 released

278 points | Rican7 | 9 years ago | github.com

123 comments


amluto|9 years ago

They added a feature that impressively fails to interoperate with the rest of the world.

> Added well-known type protos (any.proto, empty.proto, timestamp.proto, duration.proto, etc.). Users can import and use these protos just like regular proto files. Additional runtime support are available for each language.

From timestamp.proto:

  // A Timestamp represents a point in time independent of any time zone
  // or calendar, represented as seconds and fractions of seconds at
  // nanosecond resolution in UTC Epoch time. It is encoded using the
  // Proleptic Gregorian Calendar which extends the Gregorian calendar
  // backwards to year one. It is encoded assuming all minutes are 60
  // seconds long, i.e. leap seconds are "smeared" so that no leap second
  // table is needed for interpretation.
Nice, sort of -- all UTC times are representable. But you can't display the time in normal human-readable form without a leap-second table, and even their sample code is wrong in almost all cases:

  //     struct timeval tv;
  //     gettimeofday(&tv, NULL);
  //
  //     Timestamp timestamp;
  //     timestamp.set_seconds(tv.tv_sec);
  //     timestamp.set_nanos(tv.tv_usec * 1000);
That's only right if you run your computer in Google time. And, damn it, Google time leaked out into public NTP the last time there was a leap second, breaking all kinds of things.

Sticking one's head in the sand and pretending there are no leap seconds is one thing, but designing a protocol that breaks interoperability with people who don't bury their heads in the sand is another thing entirely.

Edit: fixed formatting

justinsaccount|9 years ago

It's interesting that you refer to a huge amount of planning and engineering as "sticking your head in the sand".

https://googleblog.blogspot.com/2011/09/time-technology-and-...

I think that the approach everything else uses is the "sticking your head in the sand approach". You basically pretend that there is no problem and that time is perfectly accurate, up until you have a minute with 59 or 61 seconds.

Just because suddenly trying to handle "Oh shit, everything is off by an entire second!" is the approach everything else uses doesn't mean it is the right approach.
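For readers unfamiliar with the idea, here is a toy sketch of a linear 24-hour leap smear. The window length, the centering, and the constants are illustrative assumptions in the spirit of what the blog post describes, not Google's exact historical algorithm:

```python
# Toy sketch of a linear leap-second smear, centered on the
# 2015-06-30T23:59:60Z leap second. Constants are illustrative.
LEAP = 1435708800        # Unix time at which the leap second was inserted
WINDOW = 86400           # smear the extra second over 24 hours

def smear_offset(unix_seconds):
    """Fraction of the leap second already applied at a given instant."""
    start = LEAP - WINDOW / 2
    if unix_seconds <= start:
        return 0.0
    if unix_seconds >= start + WINDOW:
        return 1.0
    return (unix_seconds - start) / WINDOW

# At the midpoint of the window, smeared clocks are half a second
# behind clocks that jump at the leap second.
print(smear_offset(LEAP))  # 0.5
```

The point of a smear is that every minute stays 60 seconds long, at the cost of every clock in the window being slightly wrong relative to true UTC.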

wongarsu|9 years ago

It's fine as a timestamp implementation, and great for many uses. But I think there's a big problem with the documentation. They start off by saying it's "at nanosecond resolution in UTC Epoch time", then go on to explain that it uses a completely different encoding that is compatible with neither UTC nor TAI (atomic time, which ignores leap seconds). And then they jump ahead to sample code which again pretends that the timestamp is UTC.

No matter whether you like "google time" or not, this is horrible documentation. They are glossing over an issue which should be marked with big red letters.

haberman|9 years ago

The question of how to reconcile leap-second-smearing systems with other systems is an interesting and important one. I'm not sure that timestamp.proto changes this issue: prior to timestamp.proto systems would still communicate using UNIX time (smeared or non-smeared) using plain integer or double seconds. timestamp.proto just provides a structure for storing UNIX time with greater range and precision than a single integer or floating point number can provide.

What I'm trying to say is that I think this is a smearing systems vs. non-smearing systems issue, and not so much a timestamp.proto issue. timestamp.proto mentions smearing but really it's just a vehicle for storing the seconds/nanos from the system clock, with whatever semantics that system clock uses. Because in practice systems don't give you access to both the smeared and non-smeared values; you get whatever the system gives you. The remarks about being leap-second-ignorant apply whether the leap second is being smeared or repeated.

Google implemented leap-second smearing in 2011, before the big push towards cloud. So the need to communicate sub-second timestamps between internal Google systems and external systems was probably not so much on people's minds. But these days we're releasing a bunch of APIs, and sub-second timestamps might become a more important issue for some of them.

So I think this issue is worth discussing further, and I opened an issue on GitHub to track it: https://github.com/google/protobuf/issues/1890

Thanks for the feedback.

jschwartzi|9 years ago

This is only an issue if you use the Timestamp to represent a human-readable time. There are more uses for timestamping than for display to a human operator. For example, one might use a timestamp in a software system to detect the passage of time, as in the use of a monotonic clock. In a real-time system you would ignore the presence of leap seconds because you will never examine the timing of your system relative to a Gregorian calendar. Rather, you just want to make sure that the station-keeping engine on your satellite burns for exactly 250 milliseconds, and leap seconds are of no use in that application.

icedchai|9 years ago

It's a serialization format containing seconds and nanoseconds. You can put whatever you want in there, including true (non-Google) UTC time, right? This seems more like a documentation problem than an actual problem with Protobuf.
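On the wire it really is just two numbers. A minimal stdlib sketch of how a Timestamp message encodes (field numbers 1 and 2 taken from timestamp.proto; this toy varint encoder only handles non-negative values, whereas real protobuf encodes negative int64s as 10-byte varints):

```python
def encode_varint(n):
    """Protobuf base-128 varint; non-negative values only in this sketch."""
    out = bytearray()
    while True:
        b = n & 0x7F
        n >>= 7
        if n:
            out.append(b | 0x80)  # continuation bit set
        else:
            out.append(b)
            return bytes(out)

def encode_timestamp(seconds, nanos):
    # Timestamp wire format: field 1 (seconds) and field 2 (nanos),
    # both varints; tag byte = (field_number << 3) | wire_type 0.
    msg = b""
    if seconds:
        msg += bytes([1 << 3 | 0]) + encode_varint(seconds)
    if nanos:
        msg += bytes([2 << 3 | 0]) + encode_varint(nanos)
    return msg

print(encode_timestamp(3, 500).hex())  # 080310f403
```

Nothing in that encoding knows or cares whether the seconds value came from a smeared clock or a true UTC one, which is the commenter's point: the semantics live in the documentation, not the bytes.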

jhspaybar|9 years ago

It saddens me that this is the top comment. It's complete and total FUD unrelated in any way to what Proto is, and to boot, it's an optional type, provided if you want it, but otherwise not forced to be used in any way! Scroll down the page for much more worthwhile discussions of Proto.

cbsmith|9 years ago

There's really no reason you can't provide your own timestamp structure, or your own timestamp transformation logic...

lmm|9 years ago

I'm glad they're willing to break compatibility to push their approach, because I think it's a better one. UTC with leap seconds is the worst of all possible worlds - not suitable for human time, not suitable for system time either - as perennial leap second bugs in such high-profile projects as the linux kernel demonstrate. Everyone seems to have agreed for years that basing system time on something without leap seconds would be better - whether that be leap smears or TAI - but no-one bothers to take action.

brazzledazzle|9 years ago

Regarding the leaking of NTP, are you talking about Systemd's default pointing at Google's NTP servers or some other event?

madgar|9 years ago

> designing a protocol

It's not a full protocol. It's a data type for a serialization library. You can write your own data types and they serialize just as well as the built-in types.

> that breaks interoperability

Wait, what was "broken" here? What was working before that isn't with this new release? What does this inclusion of a utility data type in a serialization library break that previously was intact?

zxv|9 years ago

Does this depend on use of Google's time servers?

The dependence on "smeared" leap seconds sure sounds like a dependence on such a time server.

Ouch.

Nullabillity|9 years ago

I can see caring about leap seconds right now, but a few seconds back or forth in the past probably won't matter very much.

zellyn|9 years ago

- removing optional values is actually quite nice. In practice, I end up checking for "missing or empty string" anyway.

- the "well-known types" boxed primitive types essentially add optional values back in. And depending on your language bindings, may look the same.

- extensions are still allowed in proto3 syntax files, but only for options - since the descriptor is still proto2. It seems odd to build a proto3 that couldn't represent descriptors.

- I still don't understand the removal of unknown fields. Reserialization of unknown fields was always the first defining characteristic of protobufs I described to people. I actually read many of the design/discussion docs internally when I worked at Google, and I still couldn't figure this one out. Although it's certainly simpler…

- Protobufs are the "lifeblood" (Rob Pike's words) of Google: the protobuf team is working to get rid of significant Lovecraftian internal cruft, after which their ability to incorporate open source contributions should improve dramatically.

tantalor|9 years ago

> removing optional values

Slight correction: optional values are not removed. Quite the opposite; the "optional" keyword is removed because now all fields are optional. It is actually required values which were removed.

teacup50|9 years ago

> - removing optional values is actually quite nice. In practice, I end up checking for "missing or empty string" anyway.

I feel the opposite; this greatly reduces the utility of protobuf.

Previously, I could trust that if parsing succeeded, then I had a guarantee of a populated data structure.

Now, I have to check each field individually, in manually written code, to verify that no required fields are missing.

That's really lame, and a huge step backwards.

rdtsc|9 years ago

How does this compare, or in general why would you pick this vs newer formats like Cap'n Proto or FlatBuffers?

From FlatBuffers overview I see this comparison:

---

Protocol Buffers is indeed relatively similar to FlatBuffers, with the primary difference being that FlatBuffers does not need a parsing/ unpacking step to a secondary representation before you can access data, often coupled with per-object memory allocation. The code is an order of magnitude bigger, too. Protocol Buffers has neither optional text import/export nor schema language features like unions.

---

So are the newer ones useful mostly when serialization vs deserialization speed matters (https://google.github.io/flatbuffers/) ?

cbsmith|9 years ago

Also when you want to memory map a file/have live objects in shared memory, or in general have your in-memory & serialized structures be the same.
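The zero-copy idea can be illustrated with a fixed-layout record read directly out of a buffer (or an mmap). The layout here (little-endian int32 id plus float64 price) is invented for the example; FlatBuffers' actual layout is more involved, with vtables for schema evolution:

```python
import struct

# Fixed-layout records accessed in place: no unpack-everything step.
# Layout is an assumption for illustration: int32 id + float64 price.
RECORD = struct.Struct("<id")  # 12 bytes per record, no padding

def write_records(records):
    return b"".join(RECORD.pack(rec_id, price) for rec_id, price in records)

def read_price(buf, index):
    # Read one field of one record directly from the buffer; works the
    # same whether buf is bytes, a memoryview, or an mmap.
    return RECORD.unpack_from(buf, index * RECORD.size)[1]

buf = write_records([(1, 9.5), (2, 10.25)])
print(read_price(buf, 1))  # 10.25
```

Because the serialized form *is* the in-memory form, the same bytes can back a file on disk, a shared-memory segment, or a network message.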

jokoon|9 years ago

I don't know, but I tried using Protocol Buffers once for Mapbox vector files, and the resulting C++ header was huge. It had templates and all sorts of things, more than 1,000 lines.

jackmott|9 years ago

Cap'n Proto is more or less abandoned, I believe. But it and the FlatBuffers approach give very fast serialization and deserialization (essentially zero time), though you pay a cost later when you access the data, because values are extracted on demand from the raw bytes.

I'm not sure it would often make much sense overall.

JoachimSchipper|9 years ago

This looks like a nice evolution.

It's a pity that the "deterministic serialization" gives so few guarantees; I have worked on at least one project that really needed this.

(Basically, we wanted to parse a signed blob, do some work, and pass the original data on without breaking the signature; unfortunately, this requires keeping the serialized form around, since the serialized form cannot be re-generated from its parsed format.)

pherl|9 years ago

The main reason the deterministic serialization isn't canonical is unknown fields. Since string and message fields share the same wire type, when parsing an unknown string/message field the parser has no way to know whether to recursively canonicalize it.

The cross-language inconsistency is mainly due to string-field comparison performance: Java/ObjC use UTF-16 encodings, which order strings differently than UTF-8 due to surrogate pairs.

Feel free to open an issue on the GitHub site describing your use case. We may change the deterministic serialization to give a stronger guarantee (e.g. cross-language consistency) or add another API for canonical serialization.

cbsmith|9 years ago

In a trusted system, if you don't trust the structure you are working with, why would you trust the signature?

I'd want to always work from the signed blob.

That said, this is one reason to use FlatBuffers/Cap'n Proto, I guess: you don't have to worry about this, since you never unpack the blob.

jalfresi|9 years ago

"The main intent of introducing proto3 is to clean up protobuf before pushing the language as the foundation of Google's new API platform"

Does anyone know if this means Google's public APIs will be proto3 based? I quite like protobufs.

manish_gill|9 years ago

If someone better informed than me can please explain - where and why would something like Protocol Buffers be useful?

wwalser|9 years ago

Imagine working on a team that wants to move quickly but whose output is both a product and an API consumed by multiple other teams. The product you are building uses said API, but so do other teams. Your code needs to be stable enough to support those other teams' needs (an API that doesn't change under them), but you also want to be able to make changes to your own application quickly, which means changing the API regularly.

A reasonable move is to version said API and have an ops team that ensures that all in-use versions of the API stay running. Some consumers will be on the bleeding edge, your team's application for example while others will lag behind.

Using proto* in this case is a reasonable move because you gain multiple benefits, performance being perhaps the least important in this case. Having a defined schema for your API provides some level of natural documentation for the API. Code generation allows your team to publish trusted client libraries for multiple languages.

I'll specifically call out client libraries since I've seen it make a dramatic difference in organizational efficiency, mostly to do with team to team trust levels. Without a client library the testing situation becomes a significant burden, read up on contract testing. When the team that's publishing an API also creates the client that most directly calls that API, the client library is the testing surface instead of every consumer of the API needing to test the API itself for regressions.

zellyn|9 years ago

We use them internally at Square for our RPC mechanism ("Sake", similar to "Stubby", Google's internal RPC mechanism), for our Kafka-based logging/metrics/queue infrastructure, and for defining external JSON APIs. We're in the process of switching from Sake to GRPC, which also uses Protobufs as its payload format (although you can sub in different transports).

dkopi|9 years ago

> Protocol buffers are Google's language-neutral, platform-neutral, extensible mechanism for serializing structured data – think XML, but smaller, faster, and simpler. You define how you want your data to be structured once, then you can use special generated source code to easily write and read your structured data to and from a variety of data streams and using a variety of languages.

From https://developers.google.com/protocol-buffers/

sigil|9 years ago

I'll give you an example.

I used protobuf as the output format for a web crawler. Workers read URLs and sequentially write entire HTTP responses to disk. [0] Sure, you could serialize the responses to JSON, but the overhead of representing things like binary image data as escaped unicode strings was prohibitive in my case.

"Why not BSON?" Well, schemas can be nice when performance matters. Instead of solving a parsing problem at runtime, a C/C++ reader can contain a compiler-optimized deserializer for a given protobuf schema. It's almost like directly reading and writing an array of C structs, except protobuf is architecture-independent, and you can add new fields without breaking old readers.

There are plenty of reasons to not use protobuf. I particularly disliked the code generation step for C/C++. That makes even less sense in a language like Python, and yet that's exactly what the official python protobuf implementation from Google does (did?). I wrote a python protobuf library on top of a C protobuf library that avoids codegen: https://github.com/acg/lwpb

[0] See the ARC format used by the Internet Archive for a similar (and imo clunkier) solution. http://crawler.archive.org/articles/developer_manual/arcs.ht...

phamilton|9 years ago

For me there are three main advantages: schema, performance and code generation.

Having a strict schema makes it a lot easier to maintain applications in a distributed system. Parsing protobuf is much faster than something like JSON. The multitude of code generators for protobuf make it really simple and easy to use multiple languages on the same data structures.

lordnacho|9 years ago

I used it in a trading system because it's a compact scheme for sending data across networks. It's also quite fast, and there's support for various languages. So you can have a feed handler blasting out prices using a c++ implementation, with a GUI drawing a chart written in c#.

arnarbi|9 years ago

Serializing data for RPC, network protocols or storage, description and serialization of configuration, serializable state, serializing complex types for cryptographic signing, etc.

Why is it useful? The schema both documents the data structure and allows mappings to natural APIs in many different languages. Parsers and encoders are generated for you, and are fast.

NikhilVerma|9 years ago

At Badoo we use them to have a unified API for all of our platforms (Web, Mobile Web, Android, iOS, Windows Phone etc). This would not have been possible without something like ProtoBuf.

nawitus|9 years ago

When JSON is not fast/lightweight enough.

gonyea|9 years ago

Shocking! Google's started supporting more languages than just the ones they care about. I really hope this signals the death of their disdain culture.

Being a worthwhile Cloud provider means hiring experts in all sorts of languages and supporting their efforts.

Imagine a world where Google didn't just "support node" (YEARS late), but actually turned their v8 expertise into a Cloud product.

But that'd involve convincing Java-devs-turned-VPs to care about JavaScript, <2004>and EVERYONE knows that JavaScript is a terrible language.</2004>

skybrian|9 years ago

Sadly the JSON format they chose isn't actually suitable for high-performance web apps. Web developers who use protobufs will continue to get by with various nonstandard JSON encodings.

positr0n|9 years ago

Why isn't it suitable? (I've never used protobufs)

detaro|9 years ago

What characteristics of a JSON format would be important?

the_duke|9 years ago

Why would you use JSON in a high performance context anyway?

mattiemass|9 years ago

Wow, this seems to address a bunch of problems I've experienced with protobuf in the past. Looks awesome!

grosbisou|9 years ago

Could you expand on the problems you encountered?

forrestthewoods|9 years ago

Google also has flatbuffers. I wonder if flatbuffers is being used by enough developers to justify significant development?

https://github.com/google/flatbuffers

IshKebab|9 years ago

I think it's more that GRPC (Google's RPC-over-HTTP2 protocol) directly supports Protobuf, and not Flatbuffers. All of Google's Cloud APIs use Protobuf (for example the [Speech API](https://cloud.google.com/speech/reference/rpc/) ).

I have to say, GRPC is pretty great. It's statically typed, supports loads of languages, the interfaces are simple to define (basically Protobuf), and it supports streaming requests! Most RPC systems omit that, or only have message streams (e.g. MQTT). Good RPC systems need both.

The only downside I find is that it is rather complicated (in design; not use).

alfalfasprout|9 years ago

Been using flatbuffers in production for a high speed market feed for a month now. Love it. Decode/encode time is absurdly fast (~1-2 microseconds for a small to medium schema). If you're pushing 50k+ events/second it can be a great choice. Takes up almost no space on the wire too.

zbjornson|9 years ago

> primitive fields set to default values (0 for numeric fields, empty for string/bytes fields) will be skipped during serialization.

I don't totally understand this. Presumably during deserialization they will be set to defaults and not missing? Otherwise, coupled with the removal of required fields, it seems impossible to actually send a 0-value number or empty string, or to send a proto without a field and not have it set to 0 or "" (have to explicitly null the field?).

prattmic|9 years ago

Within the API, proto3 does not have the concept of field presence. All fields are "present" and default to their type's zero value.

Since the client can handle this, there is no need to explicitly serialize default values.
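The behavior can be sketched with a toy receiver for a hypothetical two-field message (field 1: an int "id", field 2: a string "name"; field numbers and names are invented for illustration, and the varint is assumed to fit in one byte):

```python
def decode(data):
    # proto3 semantics: any field absent from the wire gets its
    # type's zero value, so an empty payload is a fully valid message.
    fields = {"id": 0, "name": ""}
    i = 0
    while i < len(data):
        tag = data[i]
        field, wtype = tag >> 3, tag & 7
        i += 1
        if field == 1 and wtype == 0:      # varint (single byte assumed)
            fields["id"] = data[i]
            i += 1
        elif field == 2 and wtype == 2:    # length-delimited string
            n = data[i]
            i += 1
            fields["name"] = data[i:i + n].decode()
            i += n
    return fields

print(decode(b""))          # {'id': 0, 'name': ''} -- nothing was sent
print(decode(b"\x08\x07"))  # {'id': 7, 'name': ''}
```

This is why skipping zero values on the sender costs nothing: the receiver cannot distinguish "sent as 0" from "not sent", which is exactly the loss of field presence the parent comment is asking about.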

blt|9 years ago

I was hoping for packed serialization of non-primitive types. I once used Protobuf to serialize small point clouds, and ended up needing to serialize them as a packed double array and reconstruct the (x, y, z) structure at read time to avoid Protobuf malloc'ing each point individually. Not a huge deal, but it would be a real pain for more complex types.
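The workaround described above can be sketched like this (the flat-double layout is an assumption about what the commenter did; the idea is one packed array instead of one allocation per point):

```python
import struct

def pack_points(points):
    # Flatten [(x, y, z), ...] into one contiguous little-endian
    # double array: a single allocation instead of one per point.
    flat = [coord for point in points for coord in point]
    return struct.pack("<%dd" % len(flat), *flat)

def unpack_points(data):
    # Reconstruct the (x, y, z) structure at read time.
    flat = struct.unpack("<%dd" % (len(data) // 8), data)
    return [tuple(flat[i:i + 3]) for i in range(0, len(flat), 3)]

pts = [(1.0, 2.0, 3.0), (4.0, 5.0, 6.0)]
assert unpack_points(pack_points(pts)) == pts
```

In a .proto schema the packed array would map naturally onto a `repeated double` field, which proto3 packs on the wire by default.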

andrewmcwatters|9 years ago

Could someone explain to me why you would use Protocol Buffers, Cap'n Proto, etc versus rolling your own type-length-value protocol besides API interop?

What if your team could write a smaller TLV protocol, and it was necessary to keep your codebase small? Would this not be wise? Are Protobufs and company not comparable to TLV protocols?
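For concreteness, a hand-rolled TLV codec really can be tiny. This sketch uses an invented framing (1-byte type, 4-byte big-endian length, raw value); it is purely illustrative, and it omits exactly the things protobuf gives you for free, like nested messages, typed fields, and forward/backward compatibility rules:

```python
import struct

def tlv_encode(records):
    # Each record: 1-byte type, 4-byte big-endian length, value bytes.
    out = b""
    for rec_type, value in records:
        out += struct.pack(">BI", rec_type, len(value)) + value
    return out

def tlv_decode(data):
    records, i = [], 0
    while i < len(data):
        rec_type, n = struct.unpack_from(">BI", data, i)
        i += 5
        records.append((rec_type, data[i:i + n]))
        i += n
    return records

msg = tlv_encode([(1, b"hello"), (2, b"\x00\x01")])
print(tlv_decode(msg))  # [(1, b'hello'), (2, b'\x00\x01')]
```

The encoder is the easy part; the ongoing cost of a homegrown protocol is everything around it: schema evolution, cross-language bindings, and debugging, which is the trade-off the reply below gets at.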

euyyn|9 years ago

In the vast majority of cases, you want your team to spend their time doing something other than reinventing protos, debugging the in-house implementation, maintaining the library, etc.

It's not clear to me anyway how doing it yourself would help keeping your codebase small vs using protos. In terms of code to maintain, doing it yourself is a net loss. In terms of binary size and method count, the proto libraries for Objective-C and Android are optimized like crazy.

wehadfun|9 years ago

In C# why use Protocol Buffer over the XML or binary serializes?

klodolph|9 years ago

The C# binary serializer is not really comparable in terms of what it does. It's more like Python's Pickle library.

http://stackoverflow.com/questions/703073/what-are-the-defic...

C# binary serialization is only useful in certain circumstances. It doesn't work outside the .NET world and it even has compatibility problems within the .NET world—you can break deserialization by making certain changes to your code. From the Microsoft documentation:

> The state of a UTF-8 or UTF-7 encoded object is not preserved if the object is serialized and deserialized using different .NET Framework versions.

(From https://msdn.microsoft.com/en-us/library/72hyey7b(v=vs.110)....)

Also see https://msdn.microsoft.com/en-us/library/ms229752(v=vs.110)....

recursive|9 years ago

Your message will be about 5% the size of the xml one, and it will be backwards compatible, unlike the built-in binary serializer.