top | item 46845167

DenisDolya | 29 days ago

Nice project! One question: decompression and page-fault handling also add latency. How do you avoid thrashing in practice? Also, for such low-level memory management, why C++ instead of C? C might give more predictable control without hidden runtime behavior.

el_dockerr | 29 days ago

Thanks for the feedback! You hit the nail on the head regarding the trade-offs.

1. Latency & Thrashing: You are absolutely right, there is overhead (context switch + LZ4). The intended use case isn't high-frequency access to hot data, but rather increasing density for "warm/cold" data in memory-constrained environments (like embedded/IoT) where the alternative would be an OOM kill or swapping to slow flash storage.

To mitigate thrashing, I'm using a configurable LRU (Least Recently Used) eviction strategy. If the working set fits within the physical limit multiplied by the compression ratio, it works smoothly. If the active working set exceeds physical RAM, it will indeed thrash, just like OS paging would. It's a trade-off: CPU cycles vs. capacity.
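To make the LRU part concrete, here's a minimal sketch of how such a resident-page tracker could look. The class name, page ids, and the "return the victim to the caller for compression" interface are my own illustration, not the project's actual API:

```cpp
#include <cstdint>
#include <list>
#include <unordered_map>

// Hypothetical LRU tracker: keeps at most maxResident pages "hot";
// touching a page past the limit reports which page to compress.
class LruPageTracker {
public:
    explicit LruPageTracker(std::size_t maxResident)
        : maxResident_(maxResident) {}

    // Mark a page as accessed; returns the id of an evicted page, or -1.
    std::int64_t touch(std::uint64_t pageId) {
        auto it = pos_.find(pageId);
        if (it != pos_.end()) {
            order_.erase(it->second);           // already resident: unlink...
        }
        order_.push_front(pageId);              // ...and move to MRU position
        pos_[pageId] = order_.begin();
        if (order_.size() > maxResident_) {     // over the physical limit:
            std::uint64_t victim = order_.back();  // least recently used
            order_.pop_back();
            pos_.erase(victim);
            return static_cast<std::int64_t>(victim); // caller compresses it
        }
        return -1;                              // nothing evicted
    }

private:
    std::size_t maxResident_;
    std::list<std::uint64_t> order_;            // MRU at front, LRU at back
    std::unordered_map<std::uint64_t,
        std::list<std::uint64_t>::iterator> pos_;
};
```

Both `touch` on a hit and eviction are O(1), which matters since this sits on the page-fault path.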

2. Why C++? Valid point regarding runtime opacity. However, I chose C++ for RAII and Templates.

RAII: Managing the life-cycle of VirtualAlloc/VirtualFree and the exception handlers is much safer with destructors, ensuring we don't leak reserved pages or leave handlers dangling.
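The RAII shape is roughly this; in the real project the constructor/destructor would call VirtualAlloc/VirtualFree (and register/unregister the exception handler), but std::malloc/std::free stand in here so the sketch stays portable:

```cpp
#include <cstddef>
#include <cstdlib>

// Sketch of an RAII owner for a reserved region. Stand-in allocation:
// the actual code would use VirtualAlloc in the constructor and
// VirtualFree in the destructor.
class PageReservation {
public:
    explicit PageReservation(std::size_t bytes)
        : ptr_(std::malloc(bytes)), size_(bytes) {}

    ~PageReservation() { std::free(ptr_); }  // released on every exit path,
                                             // including exceptions

    PageReservation(const PageReservation&) = delete;            // exactly one
    PageReservation& operator=(const PageReservation&) = delete; // owner

    void*       data() const { return ptr_; }
    std::size_t size() const { return size_; }

private:
    void*       ptr_;
    std::size_t size_;
};
```

The point is that no caller can forget the matching free: the destructor runs no matter how the scope is left.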

Templates: To integrate seamlessly with C++ containers (like std::vector), I needed to write a custom Allocator (GhostAllocator<T>). C++ templates make this zero-overhead abstraction possible, whereas in C, I'd have to rely on void* casting macros or manual memory management for generic structures.

I try to stick to a "C with Classes" subset + Templates, avoiding heavy runtime features where possible to keep it predictable.