Newtons4thLaw | 2 years ago
While required-by-default is more natural IMO, and more performant, etc., it does require a little more foresight from the programmer, for sure. But, honestly, if you want to play it safe (or just not think about it), you can mark everything tagged and pretend you're still using protobuf :vP Although, it would hurt me a little bit on the inside...
The way we typically see definitions evolve is: you create a struct with required fields first. Then, if you need to augment it later on, you can add tagged fields in a fully compatible manner. If one day you hit the rarer case where you want to remove a required field, you still can, of course; you just need to be careful and keep your endpoints in sync!
> Regarding varints, how does your QUIC-like implementation compare to something like this
So, the most obvious difference is in the supported ranges. An advantage of their varint is that it can carry a full 64-bit value, whereas ours can only go to 62 bits. We make this clear in the type names (`varint62` and `varuint62`) to avoid surprises, but it's a limitation.
I bet this is negligible in practice though; 62 bits is still a LOT of precision. It's the difference between [0 .. 18,446,744,073,709,551,615] and [0 .. 4,611,686,018,427,387,903]. Odds are 4 quintillion is enough for what you're doing.
More on the nitty-gritty side, I think their encoding is actually pretty neat! The biggest difference is in granularity. They have 9 different step sizes, whereas we only have 4. So, on average, they'll achieve a better 'compression ratio' than us, but we're basically tied up until 14 bits of precision. After that it alternates: for 15~21 bits theirs is smaller, at 22~28 they're tied, at 29~30 ours is smaller, at 31~49 theirs is again, at 50~56 they're tied, and from 57 bits up ours wins. So, like I said, on average they edge out the QUIC specification's encoding, thanks to the more granular sizing.
Then on the performance side, I expect ours is slightly more efficient. But without a benchmark that's just pure hearsay! Our size is encoded as `2^(the 2 least significant bits)`. Theirs is encoded by the number of leading zeros in binary. Both can be computed with a single instruction on any modern architecture, but counting leading zeros is just a more complex operation.
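To make the size comparison above concrete, here's a rough sketch (my own, not IceRPC's code) of the byte length each scheme needs for a given value, assuming the QUIC-style 4 step sizes (6/14/30/62 payload bits) and a leading-zeros scheme with 9 step sizes of 7 payload bits each:

```rust
/// QUIC-style varint62: 2 bits of the first byte select one of 4 sizes,
/// so 1, 2, 4, or 8 bytes carry 6, 14, 30, or 62 payload bits.
fn quic_style_len(v: u64) -> usize {
    match v {
        0..=0x3f => 1,                              // up to 6 bits
        0x40..=0x3fff => 2,                         // up to 14 bits
        0x4000..=0x3fff_ffff => 4,                  // up to 30 bits
        0x4000_0000..=0x3fff_ffff_ffff_ffff => 8,   // up to 62 bits
        _ => panic!("varuint62 max is 2^62 - 1"),
    }
}

/// Leading-zeros style (vint64-like): the leading-zero count of the first
/// byte selects one of 9 sizes, each step adding 7 payload bits
/// (1 byte = 7 bits, 2 = 14, ... 8 = 56, 9 = a full 64).
fn leading_zeros_style_len(v: u64) -> usize {
    let bits = 64 - v.leading_zeros() as usize; // significant bits in v
    match bits {
        0..=7 => 1,
        8..=56 => (bits + 6) / 7, // ceil(bits / 7)
        _ => 9,
    }
}
```

Plugging in a 21-bit value gives 3 bytes for the leading-zeros scheme versus 4 for QUIC-style, while a 30-bit value flips it to 5 versus 4, which is exactly the alternation described above.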
But this just lets me plug another cool feature in Slice: `custom` types[1] (shameless, I know :^)). These let you hook your own types (with their own custom encodings) into all our machinery. In Rust, you just implement a trait; in C#, you write an extension function.
So, if you're in a really bandwidth-constrained environment, and know that `vint64` will be better than our built-in types... you can totally use it with Slice!
[1] https://docs.icerpc.dev/slice2/language-guide/custom-types
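If you want a feel for what that Rust hook might look like, here's a sketch. The trait name `EncodeCustom` and its shape are made up for illustration (not IceRPC's actual API); it shows the idea of plugging a user-defined wire encoding, here a leading-zeros-prefixed varint that can carry a full 64-bit value, into generated code:

```rust
/// Hypothetical stand-in for the trait a Slice `custom` type would
/// implement in Rust (the real IceRPC trait has a different shape).
trait EncodeCustom {
    fn encode_into(&self, buf: &mut Vec<u8>);
}

/// A 64-bit varint: the first byte's leading-zero count selects the
/// total length (1..=9 bytes), each extra byte adding 7 payload bits.
struct Vint64(u64);

impl EncodeCustom for Vint64 {
    fn encode_into(&self, buf: &mut Vec<u8>) {
        let bits = 64 - self.0.leading_zeros() as usize;
        let n = match bits {
            0..=7 => 1,
            8..=56 => (bits + 6) / 7, // ceil(bits / 7) bytes
            _ => 9,
        };
        if n == 9 {
            // An all-zero first byte marks "8 more bytes follow".
            buf.push(0x00);
            buf.extend_from_slice(&self.0.to_be_bytes());
        } else {
            // n-byte big-endian integer: (n - 1) zero bits, a 1 bit,
            // then the 7 * n payload bits.
            let marked = (1u128 << (7 * n)) | self.0 as u128;
            let bytes = marked.to_be_bytes(); // 16 bytes, big-endian
            buf.extend_from_slice(&bytes[16 - n..]);
        }
    }
}
```

With this in place, generated code could call `encode_into` wherever the Slice definition uses the custom type, so a bandwidth-constrained deployment gets the denser encoding without touching the rest of the machinery.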
zigzag312 | 2 years ago
I think this is spot on. Each data container usually has "core" fields that are unlikely to change. Making the core fields required and the others tagged gives you simplicity for the former and flexibility for the latter.
I really appreciate the flexibility of Slice IDL. FlatBuffers has table vs struct types, but that just doesn't offer the same fine control of tagged vs untagged fields.
> But this just lets me plug another cool feature in Slice: `custom` types[1] (shameless, I know :^)). These let you hook your own types (with their own custom encodings) into all our machinery. In Rust, you just implement a trait; in C#, you write an extension function.
Can I map multiple custom types to the same C# type? :)
That would be really useful.

delaconcha | 2 years ago
Yes, that is totally fine. On the C# side both would be represented as `long[]`; the generated code would use the appropriate encoding for each custom type, as provided by the user's methods.