They added a feature that impressively fails to interoperate with the rest of the world.
> Added well-known type protos (any.proto, empty.proto, timestamp.proto, duration.proto, etc.). Users can import and use these protos just like regular proto files. Additional runtime support are available for each language.
From timestamp.proto:
// A Timestamp represents a point in time independent of any time zone
// or calendar, represented as seconds and fractions of seconds at
// nanosecond resolution in UTC Epoch time. It is encoded using the
// Proleptic Gregorian Calendar which extends the Gregorian calendar
// backwards to year one. It is encoded assuming all minutes are 60
// seconds long, i.e. leap seconds are "smeared" so that no leap second
// table is needed for interpretation.
Nice, sort of -- all UTC times are representable. But you can't display the time in normal human-readable form without a leap-second table, and even their sample code is wrong in almost all cases:
That's only right if you run your computer in Google time. And, damn it, Google time leaked out into public NTP the last time there was a leap second, breaking all kinds of things.
Sticking one's head in the sand and pretending there are no leap seconds is one thing, but designing a protocol that breaks interoperability with people who don't bury their heads in the sand is another thing entirely.
I think that the approach everything else uses is the "sticking your head in the sand approach". You basically pretend that there is no problem and that time is perfectly accurate, up until you have a minute with 59 or 61 seconds.
Just because suddenly trying to handle "Oh shit, everything is off by an entire second!" is the approach everything else uses doesn't mean it is the right approach.
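For concreteness, here's a toy sketch of what a linear smear does (the leap date and window here are made up for illustration; this is not Google's actual implementation): instead of one 61-second minute, you get a window of slightly slow seconds.

```python
# Hypothetical parameters for illustration only -- not Google's real ones.
LEAP = 1_435_708_800      # pretend a leap second is inserted at this UNIX time
WINDOW = 20 * 3600        # smear it across the preceding 20 hours

def smeared(t):
    """Map leap-ignoring 'true' seconds to a smeared clock reading.

    Outside the window the clocks agree (modulo the accumulated second);
    inside it, the smeared clock falls behind linearly, so no minute
    ever has 61 seconds.
    """
    if t <= LEAP - WINDOW:
        return float(t)
    if t >= LEAP:
        return t - 1.0            # the whole extra second has been absorbed
    frac = (t - (LEAP - WINDOW)) / WINDOW
    return t - frac               # partway through: behind by `frac` seconds
```

At the midpoint of the window the smeared clock reads half a second behind, which is exactly the kind of offset that leaks out when a smeared clock is read as plain UTC.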
It's fine as a timestamp implementation, and great for many uses. But I think there's a big problem with the documentation. They start off by saying it's "at nanosecond resolution in UTC Epoch time", then go on to explain an encoding that is compatible neither with UTC nor with TAI (atomic time, which ignores leap seconds). And then they jump ahead to sample code which again pretends that the timestamp is UTC.
No matter whether you like "google time" or not, this is horrible documentation. They are glossing over an issue which should be marked with big red letters.
The question of how to reconcile leap-second-smearing systems with other systems is an interesting and important one. I'm not sure that timestamp.proto changes this issue: prior to timestamp.proto systems would still communicate using UNIX time (smeared or non-smeared) using plain integer or double seconds. timestamp.proto just provides a structure for storing UNIX time with greater range and precision than a single integer or floating point number can provide.
What I'm trying to say is that I think this is a smearing systems vs. non-smearing systems issue, and not so much a timestamp.proto issue. timestamp.proto mentions smearing but really it's just a vehicle for storing the seconds/nanos from the system clock, with whatever semantics that system clock uses. Because in practice systems don't give you access to both the smeared and non-smeared values; you get whatever the system gives you. The remarks about being leap-second-ignorant apply whether the leap second is being smeared or repeated.
Google implemented leap-second smearing in 2011, before the big push towards cloud. So the need to communicate sub-second timestamps between internal Google systems and external systems was probably not so much on people's minds. But these days we're releasing a bunch of APIs, and sub-second timestamps might become a more important issue for some of them.
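As a sketch of what that structure holds, assuming the (seconds, nanos) convention documented in timestamp.proto (nanos normalized to [0, 1e9), even for times before the epoch):

```python
import math

def to_timestamp(unix_time: float):
    """Split fractional UNIX time into (seconds, nanos), nanos in [0, 1e9).

    Mirrors the Timestamp convention: for negative times, seconds rounds
    down and nanos stays non-negative.
    """
    seconds = math.floor(unix_time)
    nanos = round((unix_time - seconds) * 1_000_000_000)
    if nanos == 1_000_000_000:    # rounding carried into the next second
        seconds += 1
        nanos = 0
    return int(seconds), nanos
```

This is where the extra range and precision come from: a single double can't represent current UNIX times at nanosecond resolution, while the integer pair can.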
This is only an issue if you use the Timestamp to represent a human-readable time. There are more uses for timestamping than for display to a human operator. For example, one might use a timestamp in a software system to detect the passage of time, as in the use of a monotonic clock. In a real-time system you would ignore the presence of leap seconds because you will never examine the timing of your system relative to a Gregorian calendar. Rather, you just want to make sure that the station-keeping engine on your satellite burns for exactly 250 milliseconds, and leap seconds are of no use in that application.
It's a serialization format containing seconds and nanoseconds. You can put whatever you want in there, including true (non-Google) UTC time, right? This seems more like a documentation problem than an actual problem with Protobuf.
It saddens me that this is the top comment. It's complete and total FUD unrelated in any way to what Proto is, and to boot, it's an optional type, provided if you want it, but otherwise not forced to be used in any way! Scroll down the page for much more worthwhile discussions of Proto.
I'm glad they're willing to break compatibility to push their approach, because I think it's a better one. UTC with leap seconds is the worst of all possible worlds - not suitable for human time, not suitable for system time either - as perennial leap second bugs in such high-profile projects as the linux kernel demonstrate. Everyone seems to have agreed for years that basing system time on something without leap seconds would be better - whether that be leap smears or TAI - but no-one bothers to take action.
It's not a full protocol. It's a data type for a serialization library. You can write your own data types and they serialize just as well as the built-in types.
> that breaks interoperability
Wait, what was "broken" here? What was working before that isn't with this new release? What does this inclusion of a utility data type in a serialization library break that previously was intact?
- removing optional values is actually quite nice. In practice, I end up checking for "missing or empty string" anyway.
- the "well-known types" boxed primitive types essentially add optional values back in. And depending on your language bindings, may look the same.
- extensions are still allowed in proto3 syntax files, but only for options - since the descriptor is still proto2. It seems odd to build a proto3 that couldn't represent descriptors.
- I still don't understand the removal of unknown fields. Reserialization of unknown fields was always the first defining characteristic of protobufs I described to people. I actually read many of the design/discussion docs internally when I worked at Google, and I still couldn't figure this one out. Although it's certainly simpler…
- Protobufs are the "lifeblood" (Rob Pike's words) of Google: the protobuf team is working to get rid of significant Lovecraftian internal cruft, after which their ability to incorporate open source contributions should improve dramatically.
Slight correction: optional values are not removed. Quite the opposite; the "optional" keyword is removed because now all fields are optional. It is actually required values which were removed.
How does this compare, and in general why would you pick this over newer formats like Cap'n Proto or FlatBuffers?
From FlatBuffers overview I see this comparison:
---
Protocol Buffers is indeed relatively similar to FlatBuffers, with the primary difference being that FlatBuffers does not need a parsing/ unpacking step to a secondary representation before you can access data, often coupled with per-object memory allocation. The code is an order of magnitude bigger, too. Protocol Buffers has neither optional text import/export nor schema language features like unions.
---
I don't know, but I tried using protocol buffer once for mapbox vector files, the resulting C++ header was huge. It had templates and all sort of things, something like more than 1000 lines.
Cap'n'proto is more or less abandoned I believe.
But it and the flatbuffer approach give very fast serialization and deserialization (essentially zero time), but you pay a cost when you later access the data, because the values you need are extracted on demand from the raw bytes.
I'm not sure it would often make much sense overall.
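A crude way to see the trade-off (a hypothetical fixed-layout record, not the actual FlatBuffers format): "serialization" is just laying bytes down, and each later field access is an offset read straight out of the buffer.

```python
import struct

# A flat little-endian record: (price_cents: i32, qty: i32, symbol: 8 bytes)
def encode(price_cents, qty, symbol):
    return struct.pack("<ii8s", price_cents, qty, symbol.ljust(8, b"\x00"))

def read_qty(buf):
    # No parse step, no per-object allocation: decode one field in place
    return struct.unpack_from("<i", buf, 4)[0]
```

Fast to write and fast for single-field reads; the cost shows up when every field of every record gets touched anyway.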
It's a pity that the "deterministic serialization" gives so few guarantees; I have worked on at least one project that really needed this.
(Basically, we wanted to parse a signed blob, do some work, and pass the original data on without breaking the signature; unfortunately, this requires keeping the serialized form around, since the serialized form cannot be re-generated from its parsed format.)
The main reason the deterministic serialization isn't canonical is unknown fields. Since string and message types share the same wire type, when parsing an unknown string/message field the parser has no way to know whether to recursively canonicalize it.
The cross-language inconsistency is mainly due to string field comparison performance: Java/ObjC use UTF-16 encoding, which orders strings differently than UTF-8 because of surrogate pairs.
Feel free to open an issue on the GitHub site describing your use case for canonical serialization. We may strengthen the deterministic serialization guarantees (e.g. cross-language consistency) or add a separate API for canonical serialization.
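The surrogate-pair effect is easy to demonstrate: any code point above U+FFFF sorts after U+FFFF in UTF-8 byte order, but before it in UTF-16 code-unit order (which is what Java's String.compareTo sees).

```python
a = "\uffff"       # U+FFFF: last code point of the Basic Multilingual Plane
b = "\U00010000"   # U+10000: encoded as a surrogate pair in UTF-16

# UTF-8 byte order matches code point order: a < b
utf8_lt = a.encode("utf-8") < b.encode("utf-8")

# UTF-16 code-unit order: the high surrogate 0xD800 sorts before 0xFFFF
utf16_lt = b.encode("utf-16-be") < a.encode("utf-16-be")
```

So two conforming implementations sorting map keys "by string" can produce different byte streams, which is why deterministic isn't the same as canonical.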
Imagine working on a team that wants to move quickly but whose output is both a product and an API that's consumed by multiple other teams. The product you are building uses said API, but so do other teams. Your code needs to be stable enough to support these other teams' needs (an API which doesn't change under them), but you also want to be able to make changes to your own application quickly, thus needing to change the API regularly.
A reasonable move is to version said API and have an ops team that ensures that all in-use versions of the API stay running. Some consumers will be on the bleeding edge, your team's application for example while others will lag behind.
Using proto* in this case is a reasonable move because you gain multiple benefits, performance being perhaps the least important in this case. Having a defined schema for your API provides some level of natural documentation for the API. Code generation allows your team to publish trusted client libraries for multiple languages.
I'll specifically call out client libraries since I've seen it make a dramatic difference in organizational efficiency, mostly to do with team to team trust levels. Without a client library the testing situation becomes a significant burden, read up on contract testing. When the team that's publishing an API also creates the client that most directly calls that API, the client library is the testing surface instead of every consumer of the API needing to test the API itself for regressions.
We use them internally at Square for our RPC mechanism ("Sake", similar to "Stubby", Google's internal RPC mechanism), for our Kafka-based logging/metrics/queue infrastructure, and for defining external JSON APIs. We're in the process of switching from Sake to GRPC, which also uses Protobufs as its payload format (although you can sub in different transports).
> Protocol buffers are Google's language-neutral, platform-neutral, extensible mechanism for serializing structured data – think XML, but smaller, faster, and simpler. You define how you want your data to be structured once, then you can use special generated source code to easily write and read your structured data to and from a variety of data streams and using a variety of languages.
I used protobuf as the output format for a web crawler. Workers read urls and sequentially write entire HTTP responses to disk. [0] Sure, you could serialize the responses to JSON, but the overhead of representing things like binary image data as escaped unicode strings was prohibitive in my case.
"Why not BSON?" Well, schemas can be nice when performance matters. Instead of solving a parsing problem at runtime, a C/C++ reader can contain a compiler-optimized deserializer for a given protobuf schema. It's almost like directly reading and writing an array of C structs, except protobuf is architecture-independent, and you can add new fields without breaking old readers.
There are plenty of reasons to not use protobuf. I particularly disliked the code generation step for C/C++. That makes even less sense in a language like Python, and yet that's exactly what the official python protobuf implementation from Google does (did?). I wrote a python protobuf library on top of a C protobuf library that avoids codegen: https://github.com/acg/lwpb
For me there are three main advantages: schema, performance and code generation.
Having a strict schema makes it a lot easier to maintain applications in a distributed system. Parsing protobuf is much faster than something like JSON. The multitude of code generators for protobuf make it really simple and easy to use multiple languages on the same data structures.
I used it in a trading system because it's a compact scheme for sending data across networks. It's also quite fast, and there's support for various languages. So you can have a feed handler blasting out prices using a c++ implementation, with a GUI drawing a chart written in c#.
Serializing data for RPC, network protocols or storage, description and serialization of configuration, serializable state, serializing complex types for cryptographic signing, etc.
Why is it useful? The schema both documents the data structure and allows mappings to natural APIs in many different languages. Parsers and encoders are generated for you, and are fast.
At Badoo we use them to have a unified API for all of our platforms (Web, Mobile Web, Android, iOS, Windows Phone etc). This would not have been possible without something like ProtoBuf.
Shocking! Google's started supporting more languages than just the ones they care about. I really hope this signals the death of their disdain culture.
Being a worthwhile Cloud provider means hiring experts in all sorts of languages and supporting their efforts.
Imagine a world where Google didn't just "support node" (YEARS late), but actually turned their v8 expertise into a Cloud product.
But that'd involve convincing Java-devs-turned-VPs to care about JavaScript, <2004>and EVERYONE knows that JavaScript is a terrible language.</2004>
Sadly the JSON format they chose isn't actually suitable for high-performance web apps. Web developers who use protobufs will continue to get by with various nonstandard JSON encodings.
I think it's more that GRPC (Google's RPC-over-HTTP2 protocol) directly supports Protobuf, and not Flatbuffers. All of Google's Cloud APIs use Protobuf (for example the [Speech API](https://cloud.google.com/speech/reference/rpc/) ).
I have to say, GRPC is pretty great. It's statically typed, supports loads of languages, the interfaces are simple to define (basically Protobuf), and it supports streaming requests! Most RPC systems omit that, or only have message streams (e.g. MQTT). Good RPC systems need both.
The only downside I find is that it is rather complicated (in design; not use).
Been using flatbuffers in production for a high speed market feed for a month now. Love it. Decode/encode time is absurdly fast (~1-2 microseconds for a small to medium schema). If you're pushing 50k+ events/second it can be a great choice. Takes up almost no space on the wire too.
> primitive fields set to default values (0 for numeric fields, empty for string/bytes fields) will be skipped during serialization.
I don't totally understand this. Presumably during deserialization they will be set to defaults and not missing? Otherwise, coupled with the removal of required fields, it seems impossible to actually send a 0-value number or empty string, or to send a proto without a field and not have it set to 0 or "" (have to explicitly null the field?).
I was hoping for packed serialization of non-primitive types. I once used Protobuf to serialize small point clouds, and ended up needing to serialize them as a packed double array and reconstruct the (x, y, z) structure at read time to avoid Protobuf malloc'ing each point individually. Not a huge deal, but it would be a real pain for more complex types.
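The workaround looks roughly like this (a standalone sketch using struct in place of a protobuf `repeated double` field):

```python
import struct

points = [(1.0, 2.0, 3.0), (4.0, 5.0, 6.0)]

# Writer side: flatten the (x, y, z) triples into one packed double array
packed = struct.pack(f"<{3 * len(points)}d", *(c for p in points for c in p))

# Reader side: one bulk unpack, then rebuild the triples
flat = struct.unpack(f"<{len(packed) // 8}d", packed)
restored = [tuple(flat[i:i + 3]) for i in range(0, len(flat), 3)]
```

One allocation for the whole cloud instead of one per point; the schema just no longer says "point", which is the pain being described.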
Could someone explain to me why you would use Protocol Buffers, Cap'n Proto, etc versus rolling your own type-length-value protocol besides API interop?
What if your team could write a smaller TLV protocol, and it was necessary to keep your codebase small? Would this not be wise? Are Protobufs and party not comparable to TLV protocols?
In the vast majority of cases, you want your team to spend their time doing something other than reinventing protos, debugging the in-house implementation, maintaining the library, etc.
It's not clear to me anyway how doing it yourself would help keeping your codebase small vs using protos. In terms of code to maintain, doing it yourself is a net loss. In terms of binary size and method count, the proto libraries for Objective-C and Android are optimized like crazy.
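For scale, a bare-bones TLV codec really is only a dozen lines (here with a hypothetical one-byte tag and two-byte big-endian length); what you give up versus protos is everything layered on top: schema evolution rules, codegen, and cross-language tooling.

```python
import struct

def tlv_encode(fields):
    """fields: iterable of (tag, value_bytes); values up to 65535 bytes."""
    out = bytearray()
    for tag, value in fields:
        out += struct.pack(">BH", tag, len(value)) + value
    return bytes(out)

def tlv_decode(buf):
    fields, i = [], 0
    while i < len(buf):
        tag, length = struct.unpack_from(">BH", buf, i)
        i += 3
        fields.append((tag, buf[i:i + length]))
        i += length
    return fields
```

Note the codec carries no schema: every consumer must agree out-of-band on what each tag means and how its bytes decode, which is exactly the part protos standardize.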
C# binary serialization is only useful in certain circumstances. It doesn't work outside the .NET world and it even has compatibility problems within the .NET world—you can break deserialization by making certain changes to your code. From the Microsoft documentation:
> The state of a UTF-8 or UTF-7 encoded object is not preserved if the object is serialized and deserialized using different .NET Framework versions.
Performance and data size are much better with protobufs: http://stackoverflow.com/questions/549128/fast-and-compact-o.... Built-in serializers are only workable when both ends are on the same platform (i.e. .Net), and even then class versioning can be a problem.
justinsaccount|9 years ago
https://googleblog.blogspot.com/2011/09/time-technology-and-...
haberman|9 years ago
So I think this issue is worth discussing further, and I opened an issue on GitHub to track it: https://github.com/google/protobuf/issues/1890
Thanks for the feedback.
zxv|9 years ago
The dependence on "smeared" leap seconds sure sounds like a dependence on such a time server.
Ouch.
teacup50|9 years ago
I feel the opposite; this greatly reduces the utility of protobuf.
Previously, I could trust that if parsing succeeded, then I had a guarantee of a populated data structure.
Now, I have to check each field individually, in manually written code, to verify that no required fields are missing.
That's really lame, and a huge step backwards.
rdtsc|9 years ago
So are the newer ones useful mostly when serialization vs deserialization speed matters (https://google.github.io/flatbuffers/) ?
cbsmith|9 years ago
I'd want to always work from the signed blob.
That said, this is one reason to use FlatBuffers/Cap'n Proto I guess: you don't have to worry about this, since you never unpack the blob.
jalfresi|9 years ago
Does anyone know if this means Google's public APIs will be proto3 based? I quite like protobufs.
agency|9 years ago
[1] https://cloud.google.com/blog/big-data/2016/03/announcing-gr...
dkopi|9 years ago
From https://developers.google.com/protocol-buffers/
[0] See the ARC format used by the Internet Archive for a similar (and imo clunkier) solution. http://crawler.archive.org/articles/developer_manual/arcs.ht...
forrestthewoods|9 years ago
https://github.com/google/flatbuffers
prattmic|9 years ago
Since the client can handle this, there is no need to explicitly serialize default values.
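Right -- since every proto3 field has a fixed default, skipping defaults on the wire is lossless. In dict form (a sketch of the semantics, not the wire format):

```python
DEFAULTS = {"count": 0, "name": ""}

def serialize(msg):
    # proto3-style: a field equal to its default is simply not written
    return {k: v for k, v in msg.items() if v != DEFAULTS[k]}

def deserialize(wire):
    # the reader fills absent fields with defaults, so a field explicitly
    # set to 0 and a field never set at all decode identically
    return {**DEFAULTS, **wire}
```

The flip side is the grandparent's second worry: without a presence bit, "explicitly zero" and "unset" can't be told apart on the wire, which is what the wrapper well-known types exist to work around.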
klodolph|9 years ago
http://stackoverflow.com/questions/703073/what-are-the-defic...
(From https://msdn.microsoft.com/en-us/library/72hyey7b(v=vs.110)....)
Also see https://msdn.microsoft.com/en-us/library/ms229752(v=vs.110)....