feike | 1 year ago
(On some internal metrics data, I see a 98% reduction in data size.)
One of the reasons this works is that you only pay the per-tuple overhead once per grouped row, and a single grouped row can cover as many as 1,000 original rows.
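A rough back-of-envelope sketch of that amortization, assuming PostgreSQL's roughly 23-byte heap tuple header and an illustrative 8-byte payload per row (real rows and alignment padding will differ):

```python
TUPLE_HEADER = 23   # approximate PostgreSQL heap tuple header, before alignment
PAYLOAD = 8         # illustrative: one float8 metric value per row

def uncompressed_bytes(n_rows):
    # every row pays the per-tuple header
    return n_rows * (TUPLE_HEADER + PAYLOAD)

def grouped_bytes(n_rows):
    # one header for the whole group; payloads packed together
    # (ignoring the further compression of the packed values)
    return TUPLE_HEADER + n_rows * PAYLOAD

n = 1000
saved = 1 - grouped_bytes(n) / uncompressed_bytes(n)
print(f"header amortization alone saves {saved:.0%}")  # ~74%
```

The remaining gap to the 98% figure comes from actually compressing the packed values, not from the header savings alone.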
The other is the compression algorithm itself, which can be TimescaleDB's own column compression or plain PostgreSQL TOAST:
https://www.timescale.com/blog/time-series-compression-algor... https://www.postgresql.org/docs/current/storage-toast.html