krut-patel | 2 years ago
But how does it determine when it should write to disk? Does every potentially-OOM operation get preceded by a check? Take the case of a HashAggregate: the DB clearly cannot know at compile time how many unique keys will be present in the hash table; it needs to resize at runtime. So does that mean all the hash tables are still using some form of bump/arena allocator backed by the pre-allocated memory?
Maybe I should just read the source code :)
eatonphil | 2 years ago
You write a fixed number of key-value pairs to the file at a time. This is how LSM trees work: you chunk your data into N sorted keys per chunk. I don't understand all the specifics myself, but this is the gist.
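A minimal sketch of the "N sorted keys per chunk" idea, in Python: buffer aggregates in memory, then emit fixed-size sorted runs, the same shape as an LSM memtable flush. `CHUNK_SIZE` and `flush_runs` are illustrative names, not taken from any real engine.

```python
CHUNK_SIZE = 3  # hypothetical fixed memory budget: pairs per run

def flush_runs(pairs: dict) -> list:
    """Split an in-memory aggregate table into sorted runs of
    at most CHUNK_SIZE key-value pairs each."""
    items = sorted(pairs.items())  # sort once so every run is ordered
    runs = []
    for start in range(0, len(items), CHUNK_SIZE):
        run = items[start:start + CHUNK_SIZE]
        runs.append(run)  # in a real system: write this run to disk
    return runs

counts = {"a": 1, "d": 4, "b": 2, "e": 5, "c": 3}
print(flush_runs(counts))
# -> [[('a', 1), ('b', 2), ('c', 3)], [('d', 4), ('e', 5)]]
```

Because each run is sorted, the runs can later be merged with a streaming k-way merge using only bounded memory.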
> Does every potentially-OOM operation get preceded by a check?
If you allocate memory upfront and don't allocate any more memory, you can't OOM after the initial allocation. That's what TigerBeetle does.
Zig has some nice standard library containers for adding items while asserting that there's capacity. If we miscalculate, it is caught during tests because assertions fail.
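A rough Python analogue of that Zig pattern (sketch only; the class and method names are made up here): capacity is fixed up front, and every append asserts there is still room rather than growing, so a miscalculated budget fails loudly under test instead of allocating more memory.

```python
class BoundedList:
    """Container with a fixed capacity chosen at construction time."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.items = []

    def append_assume_capacity(self, item):
        # Mirrors the Zig idea: assert capacity, never reallocate.
        assert len(self.items) < self.capacity, "capacity miscalculated"
        self.items.append(item)

buf = BoundedList(2)
buf.append_assume_capacity("x")
buf.append_assume_capacity("y")
# a third append would trip the assertion during tests
```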