top | item 44354362

romesmoke | 8 months ago

I spent the most part of my PhD trying to convince people that allocators are not simple. But the not-simple part is design, specifically the strategy/policy part. In other words, minimizing fragmentation by adapting to linked program behavior. From what I know, no product-level allocator and very few research-level ones go beyond a "one size fits all" approach.

The reason is pragmatic: fancy strategies cost extra compute, which is too expensive when the allocator sits on the critical path. So what follows has a good chance of bearing no practical value, but (i) there are setups where memory capacity is a bottleneck, and (ii) hey, it's just a PhD anyway.

So what does a fancy strategy even look like? Let's look at the simplest non-trivial version of the problem. Say we know the sizes and lifetimes of all objects in advance, and we want to pack them into as tight a contiguous address space as possible. Mathematicians call this the Dynamic Storage Allocation (DSA) problem, and it's NP-complete, i.e., no known polynomial-time algorithm solves the general case optimally. Deep learning compilers battling the AI memory wall have lately been returning to DSA (it's an old problem).
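To make the offline problem concrete, here is a minimal sketch (my own illustration, not from the comment): each object is a `(start, end, size)` triple, and a greedy first-fit heuristic places each object, in the order given, at the lowest address that doesn't collide with any object alive at the same time. It's a heuristic, not an optimal solver; the problem being NP-complete is exactly why heuristics like this are used.

```python
def first_fit_pack(objects):
    """objects: list of (start, end, size) lifetimes.
    Returns (offsets, peak): an address offset per object, and the
    peak contiguous memory the packing needs.  Greedy first-fit:
    place each object at the lowest offset not overlapping, in both
    time and address, any already-placed object."""
    placed = []   # (start, end, offset, size)
    offsets = []
    for start, end, size in objects:
        # Address ranges of objects whose lifetimes overlap this one.
        live = sorted((off, off + sz) for s, e, off, sz in placed
                      if s < end and start < e)
        candidate = 0
        for lo, hi in live:
            if candidate + size <= lo:
                break                      # gap below this live block fits
            candidate = max(candidate, hi) # otherwise skip past it
        placed.append((start, end, candidate, size))
        offsets.append(candidate)
    peak = max((off + sz for _, _, off, sz in placed), default=0)
    return offsets, peak
```

For example, three objects with lifetimes `(0,4)`, `(1,2)`, `(2,5)` and sizes 2, 3, 3 pack into a peak of 5 bytes, because the second and third objects never coexist and can share the same offset; a packer with no lifetime knowledge would need 8.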

The implication of the above is that a real allocator, that is, one forced to work under uncertainty about future object sizes and lifetimes, can always be made to fail remarkably. So shipping a general-purpose solution demands that you know what you're doing (one could argue that most programs out there are "well-behaved", and I'd agree, right up to the point where that observation gets promoted to a universal law).
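A tiny simulation (again my own sketch, not the commenter's) shows how uncertainty bites. An online first-fit allocator over an sbrk-style heap frees a small block, then can't reuse the hole for a larger request, so the heap grows; a clairvoyant packer, knowing the lifetimes in advance, would have laid the blocks out to avoid this.

```python
class FirstFitHeap:
    """Online first-fit over a growable heap.  Educational only: no
    coalescing of the tail after the last block, no alignment."""
    def __init__(self):
        self.blocks = []   # (offset, size) live blocks, sorted by offset
        self.brk = 0       # current heap end (high-water mark)

    def alloc(self, size):
        prev_end = 0
        for i, (off, sz) in enumerate(self.blocks):
            if off - prev_end >= size:             # hole big enough?
                self.blocks.insert(i, (prev_end, size))
                return prev_end
            prev_end = off + sz
        off = self.brk                             # no hole fits: grow
        self.blocks.append((off, size))
        self.brk += size
        return off

    def free(self, off):
        self.blocks = [b for b in self.blocks if b[0] != off]
```

Allocate two 1-byte blocks, free the first, then ask for 2 bytes: the 1-byte hole is useless, the heap grows to 4 bytes while only 3 are live. With the lifetimes known up front, 3 bytes would have sufficed.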

Nevertheless, it remains a fact that writing a toy allocator is both educational and soberingly not hard.
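In that spirit, here is about the smallest toy allocator worth writing (my sketch, with assumed simplifications): a first-fit free list over a fixed arena, with coalescing of adjacent holes on free. Since it stores no block headers, `free` takes the size explicitly; real allocators hide that bookkeeping in the heap itself.

```python
class ToyAllocator:
    """First-fit free-list allocator over a fixed-size arena.
    Educational only: no headers, no alignment, no thread safety,
    so the caller must pass the block size back to free()."""
    def __init__(self, capacity):
        self.arena = bytearray(capacity)
        self.holes = [(0, capacity)]   # sorted (offset, size) free runs

    def malloc(self, size):
        for i, (off, sz) in enumerate(self.holes):
            if sz >= size:             # first hole that fits wins
                if sz == size:
                    del self.holes[i]
                else:
                    self.holes[i] = (off + size, sz - size)
                return off
        return None                    # out of memory

    def free(self, off, size):
        self.holes.append((off, size))
        self.holes.sort()
        merged = [self.holes[0]]       # coalesce adjacent holes
        for o, s in self.holes[1:]:
            mo, ms = merged[-1]
            if mo + ms == o:
                merged[-1] = (mo, ms + s)
            else:
                merged.append((o, s))
        self.holes = merged
```

Two allocations followed by two frees leave the arena as one hole again, which is the whole point of coalescing: without it, the free list fragments permanently.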
