krut-patel | 2 years ago
> use a GeneralPurposeAllocator and setting "enable_memory_limit" and "requested_memory_limit"
Interesting! I hadn't looked at GeneralPurposeAllocator too closely, but yes, these seem like the right way to do things instead of abusing a FixedBufferAllocator as I did.
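For reference, a minimal sketch of what that looks like (API names are from Zig's std around 0.11; this allocator has since been reworked, so check your version's docs). The 8 MiB figure is just the stack-size bound discussed below, not anything mandated by the API:

```zig
const std = @import("std");

pub fn main() void {
    var gpa = std.heap.GeneralPurposeAllocator(.{
        // Track total live bytes and enforce a cap on them.
        .enable_memory_limit = true,
    }){};
    defer _ = gpa.deinit();

    // Arbitrary 8 MiB cap, mirroring the default stack size.
    gpa.requested_memory_limit = 8 * 1024 * 1024;

    const allocator = gpa.allocator();

    // A request over the limit fails with error.OutOfMemory
    // instead of growing the heap.
    if (allocator.alloc(u8, 16 * 1024 * 1024)) |buf| {
        allocator.free(buf);
    } else |err| {
        std.debug.print("refused: {}\n", .{err});
    }
}
```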
> If the purpose is to "only use the stack"...
Not really; I just had to decide on some arbitrary upper bound on memory usage, and the default stack size (8 MiB) seemed like a decent choice. In retrospect, this challenge only took shape because my solution to Day 1 used a FixedBufferAllocator backed by a buffer on the stack, and I realized how easy Zig made it to track allocations. I didn't fiddle much with the general structure of the solution after that, and made it a "challenge" to see how far I could take it.
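The pattern described there is roughly this (a sketch, using a made-up 4 KiB buffer size; `std.ArrayList` here is the managed variant from older Zig std versions):

```zig
const std = @import("std");

pub fn main() !void {
    // Backing storage lives on the stack; the size is arbitrary.
    var buf: [4096]u8 = undefined;
    var fba = std.heap.FixedBufferAllocator.init(&buf);
    const allocator = fba.allocator();

    var list = std.ArrayList(u32).init(allocator);
    defer list.deinit();
    try list.append(42);

    // Once the buffer is exhausted, further allocations fail with
    // error.OutOfMemory rather than ever touching the heap.
}
```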
> Another potential challenge is to pre-allocate instead
Ah, that sounds much more difficult. It's also what TigerBeetle does [1]. But one thing I didn't understand even from that post: how would one deal with data structures whose size really depends on the input data, like the hash sets in TFA? The simplest way I can think of is to pick an arbitrary upper bound on the allocated memory and then check it before every operation on any dynamic structure. That sounds tedious. Is there a better way?
[1]: https://tigerbeetle.com/blog/a-database-without-dynamic-memo...
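One way to make that "arbitrary upper bound" concrete in Zig (a sketch of the general idea, not taken from the TigerBeetle post; `max_keys` is an assumed, application-chosen bound) is to size the map once at startup and afterwards only use the assume-capacity APIs:

```zig
const std = @import("std");

pub fn main() !void {
    var gpa = std.heap.GeneralPurposeAllocator(.{}){};
    defer _ = gpa.deinit();

    // Assumed upper bound on distinct keys, chosen up front.
    const max_keys = 1024;

    var set = std.AutoHashMap(u64, void).init(gpa.allocator());
    defer set.deinit();

    // One allocation at startup...
    try set.ensureTotalCapacity(max_keys);

    // ...then inserts that are guaranteed not to allocate. The bound
    // is checked before each insert instead of reacting to an OOM.
    var i: u64 = 0;
    while (i < max_keys) : (i += 1) {
        if (set.count() >= max_keys) break;
        set.putAssumeCapacity(i, {});
    }
}
```

The check-before-insert is still explicit, but it is one comparison rather than error handling on every operation.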
messe | 2 years ago
If this is correct, it's likely an undocumented implementation detail, so probably not something you should rely on always being the case.
charcircuit | 2 years ago
Just let the kernel handle it. The kernel's virtual-memory and memory-mapping abstractions make your program's implementation simpler.
kps | 2 years ago
eatonphil | 2 years ago
krut-patel | 2 years ago
But how does it determine when it should write to disk? Does every potentially-OOMing operation get preceded by a check? Take the case of a HashAggregate: the DB clearly cannot know at compile time how many unique keys will be present in the hash table; it needs to resize at runtime. So does that mean all the hash tables are still using some form of bump/arena allocator backed by the pre-allocated memory?
Maybe I should just read the source code :)