MatthiasPortzel | 2 months ago
=> https://tigerbeetle.com/blog/2022-10-12-a-database-without-d...
It's always bad to use O(N) memory if you don't have to. With a FS-backed database, you don't have to, whether you're using static allocation or not. (I work on a Ruby web-app, and we avoid loading N records into memory at once by using fixed-size batches instead.) Doing allocation up front is just a very nice way of ensuring you've thought about those limits, making sure you don't slip up, and avoiding the runtime cost of allocations.
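To make the batching point concrete, here's a minimal sketch in plain Ruby. A simple enumerable stands in for the database cursor (the constant name and helper are made up for illustration; ActiveRecord's `find_each`/`in_batches` apply the same idea against a real database):

```ruby
# Illustrative batch size; real code would tune this.
BATCH_SIZE = 3

def process_in_batches(records, batch_size: BATCH_SIZE)
  totals = []
  records.each_slice(batch_size) do |batch|
    # Only `batch_size` records are held in memory at a time,
    # instead of materializing all N records up front.
    totals << batch.sum
  end
  totals
end

puts process_in_batches((1..10).to_a).inspect
# => [6, 15, 24, 10]
```

Memory use is O(batch_size) rather than O(N), which is the whole point.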
This is totally different from OP's situation, where they're implementing an in-memory database. This means that 1) they've had to impose a limit on the number of kv-pairs they store, and 2) they're paying the cost for all kv-pairs at startup. This is only acceptable if you know you have a fixed upper bound on the number of kv-pairs to store.
matklad | 2 months ago
As a tiny nit, TigerBeetle isn't a _file system_-backed database. We intentionally limit ourselves to a single "file", and can work with a raw block device or partition, without file system involvement.
fsckboy | 2 months ago
Those features all go together as one thing, and it's the Unix way of accessing block devices (and of making them interchangeable with streams, from the client software's perspective).

But you're right, it's not the file system.
levkk | 2 months ago
Memcached works similarly (slabs of fixed size), except the slabs are not pre-allocated.
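A rough sketch of the slab-class idea, in Ruby: item sizes get rounded up to a fixed set of size classes, and each class is served from its own pool of equally-sized chunks. The base size and growth factor below are illustrative, not memcached's actual defaults:

```ruby
# Hypothetical size classes: 64, 128, 256, ... 4096 bytes.
GROWTH_FACTOR = 2
SIZE_CLASSES = (0..6).map { |i| 64 * GROWTH_FACTOR**i }

# Round an item size up to the smallest class that fits it.
def size_class_for(item_size)
  SIZE_CLASSES.find { |c| c >= item_size } or
    raise ArgumentError, "item too large: #{item_size}"
end

puts size_class_for(100) # a 100-byte item lands in the 128-byte class
```

The trade-off is some internal fragmentation (the 28 wasted bytes above) in exchange for allocation that never has to search for a variable-sized hole.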
If you're sharing hardware between multiple services (e.g. web, database, cache), the kind of performance this is targeting isn't a priority anyway.