top | item 42708425

jwillp | 1 year ago

What is the minimum resident RAM size per individual active unique series? Or what's a typical RSS RAM size for 10 or 100 million unique active series? How does unlimited cardinality avoid RAM exhaustion in this version?

pauldix | 1 year ago

Core doesn't index the metadata, so it uses less RAM for high-cardinality data. However, if you have 100M series and you're writing to all of them at the same time, you're going to need some amount of RAM just to buffer it all up and then ship it off to storage as Parquet. The Enterprise product has a compactor that creates indexes as it goes, but those indexes are lighter weight than the ones in v1 and v2. Also, users can specify which columns they want to appear in those indexes, so they can leave out high-cardinality ones if they want to save on RAM. In v3 you can brute force a query against high-cardinality data, unlike v1 and v2, which would eat up a ton of RAM to do so.
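To make the buffer-then-ship idea concrete, here's a minimal Python sketch of the pattern described above: writes accumulate in RAM, and once a threshold is reached the batch is handed off to storage (as Parquet, in the real engine). The names (`WriteBuffer`, `flush_threshold`) are illustrative, not actual InfluxDB APIs; RAM cost scales with how much data is actively buffered, not with total historical cardinality.

```python
class WriteBuffer:
    """Hypothetical sketch of a buffer-then-flush ingest path."""

    def __init__(self, flush_threshold=3):
        self.rows = []              # points buffered in RAM
        self.flushed_batches = []   # stand-in for Parquet files in object storage
        self.flush_threshold = flush_threshold

    def write(self, series_key, value):
        self.rows.append((series_key, value))
        if len(self.rows) >= self.flush_threshold:
            self.flush()

    def flush(self):
        # The real engine would encode the batch as Parquet and upload it;
        # here we just move it out of the in-memory buffer.
        self.flushed_batches.append(list(self.rows))
        self.rows.clear()


buf = WriteBuffer(flush_threshold=3)
for i in range(7):
    # Each write can be a brand-new unique series; no per-series index
    # entry is kept, only the buffered rows themselves.
    buf.write(f"cpu,host=h{i}", float(i))

print(len(buf.flushed_batches), len(buf.rows))  # → 2 1 (two batches flushed, one row still buffered)
```

The point of the sketch is that memory pressure comes from the in-flight buffer, which is bounded by the flush threshold, rather than from an ever-growing per-series index.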