sluongng | 1 month ago
Another version of this is to use gRPC to communicate the metadata of a file to download, and then sideload the file itself over a side channel such as HTTP (or some other lightweight copy method). GitLab uses this to transfer Git packfiles and serve git fetch requests, iirc: https://gitlab.com/gitlab-org/gitaly/-/blob/master/doc/sidec...
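A minimal sketch of that "metadata over RPC, bytes over a side channel" pattern, in Python for brevity. The `BlobMetadata` dataclass stands in for what would be a gRPC response message in a real system; the names and fields are illustrative, not Gitaly's actual schema:

```python
import hashlib
import http.server
import threading
import urllib.request
from dataclasses import dataclass


@dataclass
class BlobMetadata:
    """Stand-in for the gRPC response: where the bytes live, and how to verify them."""
    url: str      # side-channel location of the blob
    sha256: str   # digest for verifying the sideloaded content

BLOB = b"packfile bytes " * 1000

class BlobHandler(http.server.BaseHTTPRequestHandler):
    """The lightweight HTTP side channel: just serves raw bytes."""
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Length", str(len(BLOB)))
        self.end_headers()
        self.wfile.write(BLOB)

    def log_message(self, fmt, *args):  # silence per-request logging
        pass

def serve():
    server = http.server.HTTPServer(("127.0.0.1", 0), BlobHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

def fetch_and_verify(meta: BlobMetadata) -> bytes:
    """Client side: sideload the bytes over HTTP, then check the RPC-provided digest."""
    with urllib.request.urlopen(meta.url) as resp:
        data = resp.read()
    if hashlib.sha256(data).hexdigest() != meta.sha256:
        raise ValueError("digest mismatch")
    return data

server = serve()
meta = BlobMetadata(
    url=f"http://127.0.0.1:{server.server_address[1]}/blob",
    sha256=hashlib.sha256(BLOB).hexdigest(),
)
data = fetch_and_verify(meta)
server.shutdown()
```

The point of the split is that the control-plane call stays small and structured, while the bulk transfer goes over plain HTTP, where it can be cached, resumed, or proxied.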
pipo234 | 1 month ago
Relying on HTTP has the advantage that you can leverage commodity infrastructure like caching proxies and CDNs.
Why push protobuf over HTTP when everything you need is already present in HTTP?
avianlyric | 1 month ago
If moving big files around is a major part of the system you’re building, then it’s worth the effort. But if you’re only occasionally moving big files around, then reusing your existing gRPC infrastructure is likely preferable. It keeps your systems nice and uniform, which makes them easier to understand later, once you’ve forgotten what you originally implemented.
sluongng | 1 month ago
For example, there is common metadata such as the digest (hash) of the blob, the compression algorithm, the base compression dictionary, whether Reed-Solomon erasure coding applies, and so on.
And as others have pointed out, having existing gRPC infrastructure in place definitely makes it a lot easier to use.
But yeah, it's a tradeoff.
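The metadata fields listed above could look something like this as a sketch (hypothetical field names, not any real wire format; zlib stands in for whatever codec the metadata names):

```python
import hashlib
import zlib
from dataclasses import dataclass
from typing import Optional


@dataclass
class DownloadMetadata:
    """Hypothetical per-blob metadata sent over the RPC channel."""
    digest: str                   # sha256 of the *uncompressed* blob
    compression: str              # e.g. "zlib" or "identity"
    dictionary_id: Optional[str]  # base compression dictionary, if any
    erasure_coded: bool           # whether Reed-Solomon shards apply

def decode(payload: bytes, meta: DownloadMetadata) -> bytes:
    """Undo the transfer encoding described by the metadata, then verify the digest."""
    data = zlib.decompress(payload) if meta.compression == "zlib" else payload
    if hashlib.sha256(data).hexdigest() != meta.digest:
        raise ValueError("digest mismatch after decompression")
    return data

blob = b"example packfile contents"
meta = DownloadMetadata(
    digest=hashlib.sha256(blob).hexdigest(),
    compression="zlib",
    dictionary_id=None,
    erasure_coded=False,
)
decoded = decode(zlib.compress(blob), meta)
```

Keeping this kind of information in the structured RPC response, rather than ad-hoc HTTP headers, is part of the appeal of the hybrid approach.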
ithkuil | 1 month ago
https://github.com/mkmik/byter