item 42289354

Mave83 | 1 year ago
Small objects are very inefficient in S3. Aggregating them into bigger log objects is critical to go from a small system log to a real environment.
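One way to see why small objects hurt: S3 charges per request, so writing each log entry as its own object costs far more than writing entries in batches. A rough back-of-the-envelope calculation (assuming roughly $0.005 per 1,000 PUT requests, the longstanding us-east-1 standard-tier price; verify current rates before relying on it):

```python
# Assumed S3 Standard PUT price (us-east-1); check current pricing.
PUT_COST_PER_1000 = 0.005  # USD per 1,000 PUT requests

entries = 10_000_000  # small log entries to store

# One PUT per entry: 10M requests.
individual = entries / 1000 * PUT_COST_PER_1000

# 1,000 entries aggregated per object: 10K requests.
batched = entries / 1000 / 1000 * PUT_COST_PER_1000

print(f"individual: ${individual:.2f}, batched: ${batched:.2f}")
# Batching by 1,000x cuts request cost by 1,000x.
```

The storage cost is unchanged; only the request count (and cost) shrinks, which is why aggregation matters so much at log-scale write rates.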
vitaliyf | 1 year ago
The company I work for open-sourced a straightforward library that does exactly that: https://github.com/embrace-io/s3-batch-object-store
avinassh | 1 year ago (author here)
Definitely! I plan to add a batch write API, and also an API that buffers until it reaches a certain size or a timeout before writing to S3. Tracking the batch write here: https://github.com/avinassh/s3-log/issues/3
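The buffer-until-size-or-timeout behaviour described above can be sketched like this (all names are hypothetical, not from s3-log; `upload` is a pluggable callback standing in for the actual S3 PutObject call):

```python
import time

class BufferedWriter:
    """Accumulate records; flush when buffered bytes reach max_bytes,
    or when max_age seconds have passed since the first unflushed record.

    A real implementation would also run a background timer so the
    timeout fires without waiting for the next write; here the timeout
    is only checked on write()/maybe_flush() calls, for simplicity.
    """

    def __init__(self, upload, max_bytes=5 * 1024 * 1024, max_age=1.0,
                 clock=time.monotonic):
        self.upload = upload        # callable(bytes) -> None, e.g. an S3 PUT
        self.max_bytes = max_bytes
        self.max_age = max_age
        self.clock = clock          # injectable for testing
        self.buf = []
        self.size = 0
        self.first_at = None        # time of first unflushed record

    def write(self, rec: bytes):
        if self.first_at is None:
            self.first_at = self.clock()
        self.buf.append(rec)
        self.size += len(rec)
        self.maybe_flush()

    def maybe_flush(self):
        timed_out = (self.first_at is not None
                     and self.clock() - self.first_at >= self.max_age)
        if self.size >= self.max_bytes or timed_out:
            self.flush()

    def flush(self):
        if self.buf:
            self.upload(b"".join(self.buf))  # one object per batch
            self.buf, self.size, self.first_at = [], 0, None
```

The injectable `clock` keeps the timeout path testable without real sleeps; swapping `upload` for a boto3 `put_object` call (key naming left to the caller) would make it a working S3 batcher.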
MadsRC | 1 year ago
This is why systems such as WarpStream regularly run compaction jobs to store objects more efficiently and cut down on API calls.
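The compaction idea can be sketched as a job that merges many small objects into one larger one and records where each original landed, so old keys stay readable via ranged GETs (a toy illustration with in-memory dicts; WarpStream's actual compaction is far more involved):

```python
def compact(objects: dict[str, bytes], compacted_key: str):
    """Merge many small objects into one large object.

    Returns (new_store, manifest) where manifest maps each old key
    to (offset, length) inside the compacted object. With real S3,
    a reader would fetch an old key's data with a ranged GET:
    Range: bytes=offset-(offset + length - 1).
    """
    manifest = {}
    parts = []
    offset = 0
    for key in sorted(objects):          # deterministic layout
        data = objects[key]
        manifest[key] = (offset, len(data))
        parts.append(data)
        offset += len(data)
    new_store = {compacted_key: b"".join(parts)}
    return new_store, manifest

def read(new_store, manifest, compacted_key: str, old_key: str) -> bytes:
    """Recover an original object's bytes via its manifest entry."""
    offset, length = manifest[old_key]
    return new_store[compacted_key][offset:offset + length]
```

After compaction, N PUT/GET-sized objects become one object plus a manifest lookup, trading a small index for far fewer API calls — the same trade the batching discussion above is making at write time.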