
How RocksDB Works

285 points | DAlperin | 2 years ago | artem.krylysov.com

jlokier|2 years ago

One thing about LSM trees implemented as large numbers of large files in a filesystem, such as RocksDB, is that they defer to the filesystem to deal with fragmentation and block lookup issues. That's not actually free.

LSM tree descriptions typically imply or say outright that each layer is laid out linearly, written sequentially, and read sequentially for merging, and that looking up a block within a layer is an O(1) operation, doing random-access I/O to that location.

But really, the underlying filesystem is doing a lot of heavy lifting. It's maintaining the illusion of linear allocation by hiding how the large files are fragmented. That sequential writing is mostly sequential, but typically becomes more fragmented at the filesystem layer as the disk gets closer to full, and over time as various uses of the filesystem leave fewer large contiguous regions. More fragmented free space forces the allocation algorithms to do more work, sometimes more I/O, just to allocate space for the LSM tree's "linear" writes.

Looking up a block inside a layer requires the filesystem to search its extent tree or, with older filesystems, walk indirect blocks. Those lookups are hidden from the LSM tree database, but they are not without overhead.

Writing sequentially to a layer generally requires the filesystem to update its free space structures as well as its extent tree or indirect blocks.

Even a simple operation like the LSM tree database deleting a layer file it has finished with is not necessarily simple and quick at the filesystem layer.

In other words, when analysing performance, filesystems are the unsung heroes underlying some LSM tree databases. Their algorithmic overhead is often not included in the big-O analysis of LSM tree algorithms running over them, but should be, and their behaviour changes as disk space shrinks and over time due to fragmentation.
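To make the hidden cost concrete, here is a toy sketch (the extent layout is invented, not any real filesystem's code) of what "O(1) random access" into a fragmented file actually involves: every logical offset must first be translated through the extent map.

```python
# Hypothetical sketch: translating a "linear" file offset to a physical
# block through an extent list, as an extent-based filesystem must do.
# The translation is a search over extents, not the O(1) access that
# LSM-tree analyses usually assume.
import bisect

# Each extent: (logical_start_block, physical_start_block, length_in_blocks)
extents = [
    (0,  1_000, 64),    # file blocks 0..63  -> physical 1000..1063
    (64, 5_000, 32),    # fragmentation: the next run lives elsewhere
    (96, 2_048, 128),
]
logical_starts = [e[0] for e in extents]

def physical_block(logical_block: int) -> int:
    """Map a logical file block to its physical block: O(log n_extents)."""
    i = bisect.bisect_right(logical_starts, logical_block) - 1
    log_start, phys_start, length = extents[i]
    assert logical_block < log_start + length, "hole in file"
    return phys_start + (logical_block - log_start)

print(physical_block(70))  # 5006: block 70 lives in the second fragment
```

The more fragmented the file, the longer this extent list grows, and with enough extents the map itself may no longer fit in the inode and requires extra I/O to read.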

eclark|2 years ago

>But really, the underlying filesystem is doing a lot of heavy lifting.

I think that's vastly underselling the work done to ensure that each block is written linearly; blocks are structured, sized, written, and accessed in a way that leaves the filesystem very little to do (direct I/O, fadvise, dropping caches on writes, etc.). I was in total agreement with you for a long time. The RocksDB devs have put in the work, and a tuned RocksDB usually gets faster the less the FS does.
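For illustration, a minimal Linux-only sketch of one such hint (the filename is invented; O_DIRECT and error handling are omitted): after writing a file durably, the engine tells the page cache to drop its pages, since the DB maintains its own block cache.

```python
# Sketch: write a file, then advise the kernel to drop its cached pages
# (POSIX_FADV_DONTNEED), so the filesystem's page cache stays out of the
# engine's way. Linux-specific; a real engine adds O_DIRECT and alignment.
import os

fd = os.open("sst_000001.tmp", os.O_CREAT | os.O_WRONLY, 0o644)
os.write(fd, b"\0" * 4096)
os.fsync(fd)  # make the data durable before dropping the cache
os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_DONTNEED)
os.close(fd)
```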

Lately, linear reads and writes are not why one chooses LSMs in a datacenter setting. Access times of even cheap, slow SSDs are amazing.

They are used for controlling write amplification, with tunable, known costs. That is, you write fewer hardware blocks to the flash chips with a well-tuned RocksDB.

vlovich123|2 years ago

I agree that DB papers will typically overlook the impact the filesystem has on the database (not just rocksdb - what you wrote is true for everything except something like BlueStore). It’s particularly depressing when you look at how they measure write amplification which tends to ignore things they’re just offloading to the filesystem.

However, I think you’re making a mistake on a core part of your argument:

> More fragmented free space makes the allocation algorithms have to do more work, sometimes more I/O, just to allocate space for the LSM tree's "linear" writes.

The filesystem in no way needs to guarantee on-disk contiguity for read or write performance, nor does any online defrag need to happen. Indeed, the whole premise behind LSM trees is to optimize for solid-state storage. AFAIK if the filesystem can only find 1 MiB blocks it will allocate them at the cost of a larger set of extents, and typically filesystems do a fantastic job of defrag too. That's certainly an important part, but I'd say those parts of the filesystem are likely among the first things implemented and rarely changed (just a hunch - I haven't actually bothered looking at the Linux changelog).

Also, no one really cares about performance on an almost-full filesystem. (A kind-of-full filesystem, say 75%, that is old and thus heavily fragmented is a valid concern, but I doubt it's actually a problem, given how good filesystems are.)

l05k|2 years ago

I would recommend a related paper, "Building an Efficient Key-Value Store in a Flexible Address Space", which focuses on the interaction problem between extent-based filesystems and key-value stores.

However, in scenarios that really care about performance, memory-mapped files can bypass much of the I/O stack. RocksDB has a corresponding optimization: https://github.com/facebook/rocksdb/wiki/PlainTable-Format

midom|2 years ago

This misses multiple points - lots of block access happens via internal caches (HyperClock/LRU/etc.), and the filesystem isn't in the critical path as much as you'd think.

Files are mapped out in relatively large chunks, especially compaction outputs - there's preallocation, and usually you will just have a flat file->block conversion without huge trees or anything.

Based on performance profiles, the filesystem doesn't do any heavy lifting; there's not that much fragmentation (and you usually keep some free space for flash GC anyway).

A compaction output write is one logical operation on the filesystem for tens of megabytes of data.

> filesystems are the unsung heroes underlying some LSM tree databases

meh

schwartzie|2 years ago

Congratulations to the author for a remarkably clear, easy-to-follow, and informative post. Excellent technical writing!

tmulcahy|2 years ago

Agreed. This is the best explanation of Log-Structured Merge Trees I've seen. I finally feel confident that I understand the concept.

benjaminwootton|2 years ago

I was about to post the same. One of the best technical articles I've ever read.

willvarfar|2 years ago

A bit of a tangent, but HNers often have the kind of hands-on experience that's hard to find in internet searches, so I'll ask away :)

A long time ago we had a big MySQL TokuDB database and were keen to migrate to MyRocks. But MyRocks put every table into a single big file, rather than a file per partition.

Partition-per-file is a big deal if you are retaining N days of data in a DB and dropping the oldest day every night. If your DB stores each partition in a separate file, it can simply delete the file. But if your DB stores all the partitions in a single file, then it ends up having to compact your absolutely massive dataset. It was completely unworkable for us.

Has this changed?

jorangreef|2 years ago

Hey Will! Joran from TigerBeetle here.

Partitioning data across files (or LSM trees) can be a remarkable win. For data retention policies, as well as for exploiting immutability in different workloads to reduce write amplification.

For example, in TigerBeetle, a DB that provides double-entry financial accounting primitives, our secondary indexes mutate, but half of our ingest volume (the transactions themselves) is immutable and inserted in chronological order.

We therefore designed our local storage engine as an LSM-forest, putting different key/value types in their own trees, so that mutable data wouldn't compact immutable data. This turns our object tree for primary keys into essentially an append-only log.
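A minimal sketch of the routing idea (illustrative names only, nothing like TigerBeetle's actual code): each key/value type gets its own tree, so compaction of mutable index entries never rewrites immutable transfer rows.

```python
# Toy LSM-forest: a forest routes each write to a per-type tree, so the
# immutable, chronologically-inserted data lives in its own tree and is
# never churned by compaction of the mutable indexes.
class Tree:
    def __init__(self, name: str):
        self.name = name
        self.memtable = {}  # stand-in for a full memtable + SST hierarchy

    def put(self, key, value):
        self.memtable[key] = value

class Forest:
    def __init__(self, kinds):
        self.trees = {k: Tree(k) for k in kinds}

    def put(self, kind, key, value):
        # Route by key/value type: one tree per kind.
        self.trees[kind].put(key, value)

forest = Forest(["transfers", "accounts_index"])
forest.put("transfers", 1, b"immutable row")   # append-only, in order
forest.put("accounts_index", "alice", 1)       # mutable index entry
```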

I did a lightning talk on this, and a few of our other LSM optimizations, at Jamie Brandon's HYTRADBOI conference last year: https://www.youtube.com/watch?v=yBBpUMR8dHw

RocksDB also allows you to do this, with its concept of column families, if I am not mistaken. However, we wanted more memory efficiency with static memory allocation, deterministic execution and deterministic on disk storage for faster testing (think FoundationDB's simulator but with storage fault injection) and faster recovery (thanks to smaller diffs, with less randomness in the data files being recovered), and also an engine that could solve our storage fault model.

All details in the talk. Or ping me if you have questions.

midom|2 years ago

Something doesn't make sense here - MySQL/InnoDB does put tables into files, but partitions get separate files.

MyRocks has a collection of files per column family, and when you drop data it can quickly expunge files that don't contain data for other tables/partitions - and trigger compaction on neighbors, if needed.

nitinreddy88|2 years ago

I am looking for an optimal (KV) storage engine that can store operational telemetry (temporarily) at the source node. Operational telemetry is generated frequently, and similar operations need to be merged frequently (a little compaction). Once it reaches a good size (100 MB), we can transfer it to dedicated time-series database engines through various mechanisms. I am struggling to find a fast, write-heavy, memory-efficient store for this.

RocksDB seems to tick a few boxes, but there could be a much better solution, since we don't need operations like deletes or range scans.

Any suggestions?

liketochill|2 years ago

VictoriaMetrics can store data at the local node and forward it to a central server when the network is functioning.

seedless-sensat|2 years ago

Deletes (tombstones) shouldn't really get in your way if you don't need them. Similarly, range scans just come for free from SSTables being sorted. Archiving RocksDB SSTable files can be a decent strategy.

One thing to pay attention to: if your telemetry data is indexed by timestamp (i.e. you're writing to the keyspace in order), the compaction of immutable SSTable layers could be wasteful. Although the author's nice example of non-overlapping SSTable key ranges suggests there may be minimal write amplification here too.

winrid|2 years ago

Pipe it to a write-ahead file on the source host, read it back, and store an offset when you commit. That would be a very efficient solution.

But really, I would benchmark sqlite first...
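A rough sketch of the write-ahead-file-plus-offset idea (file names, batching, and the `send` callback are illustrative assumptions): records are appended durably, and the reader offset is persisted only after a batch has been shipped, so a crash simply replays from the last commit.

```python
# Sketch: append-only telemetry log with an at-least-once shipping offset.
import os

LOG, OFFSET = "telemetry.log", "telemetry.offset"

def append(record: bytes) -> None:
    """Append one record and make it durable before acknowledging."""
    with open(LOG, "ab") as f:
        f.write(record + b"\n")
        f.flush()
        os.fsync(f.fileno())

def committed_offset() -> int:
    try:
        with open(OFFSET) as f:
            return int(f.read())
    except FileNotFoundError:
        return 0

def ship_batch(send) -> None:
    """Ship everything past the last commit, then advance the offset."""
    start = committed_offset()
    with open(LOG, "rb") as f:
        f.seek(start)
        data = f.read()
    if data:
        send(data)                    # e.g. upload to the central TSDB
        with open(OFFSET, "w") as f:  # commit only after success
            f.write(str(start + len(data)))
```

If `send` fails, the offset is untouched and the next `ship_batch` retries the same bytes, which is why the downstream store should tolerate duplicates.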

dkarpas|2 years ago

Speedb is a great option. Please check us out and join the most active RocksDB community: https://discord.gg/5fVUUtM2cG You can share your logs there or send them directly to us for analysis. Contact me if you have any questions: dan@speedb.io

didgetmaster|2 years ago

I am building my own KV store (www.Didgets.com) that can store 10 million KV pairs in about 4 seconds (using my Ryzen 5950X machine). Those pairs take up about 300MB of disk space and have about the same memory footprint when loaded. The values have a variety of data types (strings, integers, doubles, datetime, etc.) The software is still in beta but is available for free download.

dikei|2 years ago

I think RocksDB would be a good choice; you really can't beat an LSM design for write-heavy workloads. Depending on how the memtable is implemented, they can write at practically RAM speed. And although you might think you don't need range scans now, they're very useful for any kind of time-series data.

foota|2 years ago

What kind of telemetry exactly?

Maybe I don't understand the problem, but can you not just store it in memory (e.g., with a map from key to current value), update (for instance, increment) it as you go, and whenever you want to take a timeseries value just push the set of current values back to a vector?

digikata|2 years ago

Write Apache Arrow/Parquet files directly? A lot of other DBs can ingest those anyway.

RhodesianHunter|2 years ago

Any reason you can't shove it into Kafka?

dikei|2 years ago

RocksDB is awesome, though don't use it with the regular glibc malloc, because that can cause extreme memory fragmentation. Use jemalloc, tcmalloc, or mimalloc: basically any other advanced malloc library that can effectively reuse memory.

jeffbee|2 years ago

This goes for pretty much every C++ program. I doubt there are any useful programs for which the GNU allocator is optimal.

adev_|2 years ago

RocksDB is an amazing piece of engineering that deserves to be better known.

It is battle tested. It does one job and does it well.

I have used it in the past as a middleware database taking an average of 2-3k req/sec with over 400 GB of data stored. It works like a charm.

If I had a single reproach, it would be around instrumentation. It is not that straightforward to get proper metrics and reporting on the internals.

zip1234|2 years ago

Well-written article - one clarification on how Meta uses it, though. It is not TAO, it is ZippyDB: https://engineering.fb.com/2021/08/06/core-data/zippydb/

ipozgaj|2 years ago

I am lucky enough to have worked on all three of these systems (TAO, ZippyDB, and currently MySQL), so I can shed some light here.

Both MySQL and ZippyDB are datastores that use RocksDB under the hood, in slightly different ways and with different querying capabilities exposed to the end user. ZippyDB uses it exclusively, while MySQL uses both the traditional InnoDB and RocksDB (MyRocks). TAO is an in-memory graph database, a layer above both of these, and doesn't persist anything by itself - it talks to the database layer (MyRocks).

polishdude20|2 years ago

How does flushing in a background process work if it's an embeddable database that lives inside your application? The article says there is no external process, so how is there a background process that performs compaction and flushing?

remram|2 years ago

It's a thread: https://artem.krylysov.com/blog/2023/04/19/how-rocksdb-works...

> RocksDB runs a dedicated background thread that persists immutable memtables to disk.

They are using "process" to mean "mechanism" (something that happens), not a literal OS process. I agree that it's a bit confusing to use the word both ways.
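A toy illustration of that in-process "background process" (nothing like RocksDB's actual code): a plain thread inside the embedding application waits for sealed memtables and persists them, while writers continue on a fresh memtable.

```python
# Sketch: a dedicated flush thread, as the article describes RocksDB's
# background thread doing, modeled with a queue hand-off.
import threading, queue

flush_queue = queue.Queue()
flushed = []  # stand-in for SST files on disk

def flusher():
    while True:
        memtable = flush_queue.get()  # blocks until a memtable is sealed
        if memtable is None:          # shutdown sentinel
            return
        flushed.append(sorted(memtable.items()))  # "write an SST"

t = threading.Thread(target=flusher, daemon=True)
t.start()

active = {"k2": "v2", "k1": "v1"}  # writes go to the active memtable
flush_queue.put(active)            # memtable full: seal it and hand it off
active = {}                        # writers continue on a fresh memtable
flush_queue.put(None)
t.join()
print(flushed)  # [[('k1', 'v1'), ('k2', 'v2')]]
```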

kilotaras|2 years ago

> To find a specific key, we could use binary search on the SST file blocks.

I don't think it's possible except when both key and value are fixed size (which is not the case in the example shown).

KAdot|2 years ago

Binary searching SST file blocks is a stretch, I agree. The key-value pairs would need to have a specific shape. Enabling compression makes it completely impossible. I'll remove this from the article, thanks for the feedback!
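For context, a common way SST formats sidestep this (a toy sketch, not RocksDB's actual on-disk layout) is a sparse index of each block's first key: the binary search runs over the index, and only the one candidate block is scanned linearly, so variable-length or compressed entries inside blocks are fine.

```python
# Sketch: point lookup via a sparse per-block index over sorted blocks.
import bisect

blocks = [  # each block: sorted (key, value) pairs of varying sizes
    [("apple", "1"), ("banana", "22")],
    [("cherry", "333"), ("grape", "4")],
    [("melon", "55555"), ("pear", "6")],
]
first_keys = [b[0][0] for b in blocks]  # the sparse index

def get(key):
    # Binary search the index for the only block that could hold the key.
    i = bisect.bisect_right(first_keys, key) - 1
    if i < 0:
        return None  # key sorts before the first block
    for k, v in blocks[i]:  # linear scan inside a single block
        if k == key:
            return v
    return None

print(get("grape"))  # '4'
```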

attrutt|2 years ago

Damn, there's some astonishing writing skills going on here

Tim25659|2 years ago

Great post on rocksdb