dpeckett | 1 year ago
Been increasingly using LSP-style JSON-RPC 2.0. Sure, it's got its quirks and is far from the most wire/marshaling-efficient approach, but JSON codecs are ubiquitous and JSON-RPC is trivial to implement. In fact, I recently wrote a stack-allocated server implementation for microcontrollers in Rust: https://github.com/OpenPSG/embedded-jsonrpc.
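To show how trivial it really is, here's a minimal sketch of a JSON-RPC 2.0 dispatcher (the `add` method is a hypothetical example, not from the spec):

```typescript
// Minimal JSON-RPC 2.0 server loop: parse, dispatch, reply. Sketch only;
// a real server also validates `jsonrpc`, handles parse errors (-32700), etc.
type Request = { jsonrpc: "2.0"; id?: number | string | null; method: string; params?: unknown };

const methods: Record<string, (params: any) => unknown> = {
  add: ([a, b]: [number, number]) => a + b, // hypothetical example method
};

function handle(raw: string): string | undefined {
  const req = JSON.parse(raw) as Request;
  const fn = methods[req.method];
  if (!fn) {
    if (req.id === undefined) return undefined; // unknown notification: ignore
    return JSON.stringify({ jsonrpc: "2.0", id: req.id, error: { code: -32601, message: "Method not found" } });
  }
  const result = fn(req.params);
  if (req.id === undefined) return undefined; // notification: execute, but don't reply
  return JSON.stringify({ jsonrpc: "2.0", id: req.id, result });
}
```

That's essentially the whole spec: match on `method`, reply with the same `id`, and treat id-less requests as notifications.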
Varlink (https://varlink.org/) is another approach worth a look; there are reasons they didn't adopt the full JSON-RPC spec, but their IDL is pretty interesting.
elcritch|1 year ago
Take JSON-RPC and replace JSON with MsgPack for better handling of integer and float types. MsgPack/CBOR are easy to parse in place directly into stack objects too. It's super fast even on embedded. I've been shipping it for years in embedded projects using a Nim implementation for ESP32s (1) and later made a non-allocating version (2). It's also generally easy to convert MsgPack/CBOR to JSON for debugging, etc.
There's also an IoT-focused RPC based on CBOR that's an IETF standard, plus a time-series format (3). The RPC is used a fair bit in some projects.
1: https://github.com/elcritch/nesper/blob/devel/src/nesper/ser...
2: https://github.com/EmbeddedNim/fastrpc
3: https://hal.science/hal-03800577v1/file/Towards_a_Standard_T...
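The integer/float distinction is visible right in the MessagePack wire format: a typed tag byte, not a decimal string. A hand-rolled sketch of two encoding cases (a real codec covers the full spec; this is just to show the framing):

```typescript
// Two MessagePack cases by hand: a small integer is a single "positive fixint"
// byte (0x00-0x7f), while a double is tagged 0xcb followed by 8 big-endian
// IEEE-754 bytes. So integers and floats never get confused, unlike in JSON.
function packSmallInt(n: number): Uint8Array {
  if (n < 0 || n > 127) throw new Error("only positive fixint covered in this sketch");
  return Uint8Array.of(n); // the value is its own one-byte encoding
}

function packFloat64(x: number): Uint8Array {
  const buf = new Uint8Array(9);
  buf[0] = 0xcb; // float64 tag
  new DataView(buf.buffer).setFloat64(1, x); // big-endian by default
  return buf;
}
```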
bccdee|1 year ago
Honestly, if protobuf just serialized to a strictly specified subset of JSON, I'd be happy with that. I'm not in it for the fast ser/de, and something human-readable could be good. But when multiple services maintained by different teams are passing messages around, a robust schema language is a MASSIVE help. I haven't used Avro, but I assume it's similarly useful.
rochacon|1 year ago
- dRPC (by Storj): https://drpc.io (also compatible with gRPC)
- Twirp (by Twitch): https://github.com/twitchtv/twirp (no gRPC compatibility)
crabmusket|1 year ago
Me too. My context is that I end up using RPC-ish patterns when doing slightly out-of-the-ordinary web stuff, like websockets, iframe communications, and web workers.
In each of those situations you start with a bidirectional communication channel, but you have to build your own request-response layer if you need that. JSON-RPC is a good place to start, because the spec is basically just "agree to use `id` to match up requests and responses" and very little else of note.
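That request-response layer is small enough to sketch in full. Here's a correlator in the spirit of the spec: a `Map` of pending promises keyed by `id` (the class and its API are my own illustration, not from any library):

```typescript
// Request/response correlation over any bidirectional channel (websocket,
// postMessage, worker): each call gets a fresh `id`; inbound messages settle
// the matching pending promise. Error handling omitted for brevity.
class RpcClient {
  private nextId = 1;
  private pending = new Map<number, (v: unknown) => void>();

  // `send` pushes a raw string onto whatever channel you have.
  constructor(private send: (msg: string) => void) {}

  call(method: string, params?: unknown): Promise<unknown> {
    const id = this.nextId++;
    return new Promise((resolve) => {
      this.pending.set(id, resolve);
      this.send(JSON.stringify({ jsonrpc: "2.0", id, method, params }));
    });
  }

  // Feed every inbound message here; matches on `id` and resolves the caller.
  onMessage(raw: string): void {
    const msg = JSON.parse(raw);
    const resolve = this.pending.get(msg.id);
    if (resolve) {
      this.pending.delete(msg.id);
      resolve(msg.result);
    }
  }
}
```

Wire `send` and `onMessage` to `postMessage`/`onmessage` (or a socket) and you have the whole layer.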
I've been looking around for a "minimum viable IDL" to add to that, and I think my conclusion so far is "just write out a TypeScript file". This works when all my software is web/TypeScript anyway.
hansvm|1 year ago
State of the art for both gzipped JSON and protobuf parsing is a few GB/s. Details matter (big strings, arrays, and binary data will push protos to 2x-10x faster in typical cases), but it's not the kind of landslide victory you'd get from a proper binary protocol. There isn't much need to feel like you're missing out.
jlouis|1 year ago
5-10x is not uncommon, and that's kissing an order of magnitude difference.
imtringued|1 year ago
Meanwhile if you care about parsing speed, there is MessagePack and CBOR.
If any form of parsing is too expensive for you, you're better off with FlatBuffers or Cap'n Proto.
Finally there is the holy grail: use JIT compilation to generate "serialization" and "deserialization" code at runtime through schema negotiation whenever you create a long-lived connection. Since the protocol is unique to every (origin, destination) architecture+schema tuple, you can in theory write out the data in a way that the target machine can directly interpret as memory after sanity-checking the pointers. This could beat JSON, MessagePack, CBOR, FlatBuffers, and Cap'n Proto in a single "protocol".
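A toy version of that idea, just to make it concrete: compile a serializer function at runtime from a negotiated schema, so the hot path has no per-field branching. This sketch emits JSON for simplicity (the comment's full idea goes further, to raw memory images); the `Field` shape is my own invention:

```typescript
// Schema-driven codegen: build a specialized serializer with `new Function`
// once per connection, then reuse it for every message on the hot path.
type Field = { name: string; type: "number" | "string" };

function compileSerializer(schema: Field[]): (obj: any) => string {
  const body = schema
    .map((f) => f.type === "number"
      ? `'"${f.name}":' + obj.${f.name}`
      : `'"${f.name}":' + JSON.stringify(obj.${f.name})`)
    .join(` + ',' + `);
  // Runtime code generation is the whole point here; in production you'd
  // sanitize field names before splicing them into source text.
  return new Function("obj", `return '{' + ${body} + '}';`) as (obj: any) => string;
}
```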
And then there is protobuf/grpc, which seems to be in this weird place, where it is not particularly good at anything.
ajross|1 year ago
There are just very few spots that actually "need" protobuf at a level of urgency that would justify walking away from self-describing text formats (losing self-description is a big, big disadvantage of binary formats!).
[1] Something very well served by JSON
[2] Network routing, stateful packet inspection, on-the-fly transcoding. Stuff that you'd never think to use a "standard format" for.
bboygravity|1 year ago
That potentially means the majority of devices in the world.
dpeckett|1 year ago
Also, JS now has a BigInt type, and the JSON decoder can be told to use it. So I'd argue it's kind of a moot point at this stage.
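For instance, here's one portable way to round-trip 64-bit integers losslessly. The string-encoding convention below is my assumption, not part of JSON-RPC; newer runtimes with the JSON.parse source-text-access feature can parse numeric literals straight into BigInt instead:

```typescript
// Round-tripping an integer above Number.MAX_SAFE_INTEGER through JSON.
// Convention (an assumption): big integers travel as decimal strings.
const payload = { id: 9007199254740993n };

// Serialize: the replacer turns BigInt into a decimal string.
const text = JSON.stringify(payload, (_key, value) =>
  typeof value === "bigint" ? value.toString() : value
);

// Deserialize: the reviver restores integer-looking strings to BigInt.
// (A real codec would scope this to known fields, not every numeric string.)
const decoded = JSON.parse(text, (_key, value) =>
  typeof value === "string" && /^\d+$/.test(value) ? BigInt(value) : value
);
```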