top | item 14279499


SFJulie | 8 years ago

Memory fragmentation caused by dynamically sized data structures plus multithreading is an old foe, and it may not be fixable in C/C++.

Worker A allocates dynamic data; the allocator hands it the segment [0, ofA]. Worker B then allocates to build the same kind of data structure (a fragment of a JSON) and gets [ofA, ofB]. Worker A resumes allocating, grows past the boundary of [0, ofA], finds no free contiguous space on either side, and [ofB, ofC] is allocated. Worker C enters and wants to allocate, but sizeof(string) makes its request bigger than the hole at [0, ofA], so [ofD, ofE] is asked for... and the more concurrent workers there are, the more the interleaved allocations fragment memory.
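The interleaving above can be sketched with a toy first-fit allocator over a fixed arena (a simplified model in Python; the Arena class and all sizes and offsets are invented for illustration):

```python
# Toy first-fit allocator over a fixed 64-byte arena. Two workers interleave
# small allocations; when one worker frees its blocks, plenty of memory is
# free in total but no single gap is big enough for a larger request.

class Arena:
    def __init__(self, size):
        self.size = size
        self.blocks = []  # sorted list of (offset, size, owner)

    def alloc(self, size, owner):
        # First-fit: scan the gaps between allocated blocks.
        prev_end = 0
        for i, (off, sz, _) in enumerate(self.blocks):
            if off - prev_end >= size:
                self.blocks.insert(i, (prev_end, size, owner))
                return prev_end
            prev_end = off + sz
        if self.size - prev_end >= size:
            self.blocks.append((prev_end, size, owner))
            return prev_end
        return None  # no contiguous gap large enough

    def free(self, offset):
        self.blocks = [b for b in self.blocks if b[0] != offset]

    def free_total(self):
        return self.size - sum(sz for _, sz, _ in self.blocks)

    def largest_gap(self):
        gaps, prev_end = [], 0
        for off, sz, _ in self.blocks:
            gaps.append(off - prev_end)
            prev_end = off + sz
        gaps.append(self.size - prev_end)
        return max(gaps)

arena = Arena(64)
# Workers A and B interleave 8-byte allocations...
a1 = arena.alloc(8, "A"); b1 = arena.alloc(8, "B")
a2 = arena.alloc(8, "A"); b2 = arena.alloc(8, "B")
a3 = arena.alloc(8, "A"); b3 = arena.alloc(8, "B")
a4 = arena.alloc(8, "A"); b4 = arena.alloc(8, "B")
# ...then A finishes and frees its fragments.
for off in (a1, a2, a3, a4):
    arena.free(off)

print(arena.free_total())    # 32 bytes free in total...
print(arena.largest_gap())   # ...but no gap larger than 8 bytes,
print(arena.alloc(16, "C"))  # so worker C's 16-byte request fails: None
```

Half the arena is free, yet the 16-byte allocation fails: that is external fragmentation.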

Since malloc is costly and the problem well known, complex allocators were created, with slab pools and the like, backed by really complex PhD-proven heuristics, and probably harboring one edge case that is very hard to trigger.
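The fixed-size-pool idea behind such allocators can be sketched like this (a hypothetical, heavily simplified SlabPool in Python): because every slot in a pool has the same size, any freed slot satisfies any request, so external fragmentation cannot occur; the price is internal fragmentation inside each slot.

```python
# Minimal slab-pool sketch: all slots share one size, tracked on a free list.

class SlabPool:
    def __init__(self, slot_size, nslots):
        self.slot_size = slot_size
        self.free_slots = list(range(nslots))  # indices of free slots

    def alloc(self, size):
        if size > self.slot_size or not self.free_slots:
            return None  # oversize requests or an exhausted pool go elsewhere
        return self.free_slots.pop()

    def free(self, slot):
        self.free_slots.append(slot)

pool = SlabPool(slot_size=64, nslots=4)
slots = [pool.alloc(48) for _ in range(4)]  # 16 bytes wasted per slot
pool.free(slots[1]); pool.free(slots[3])    # free in any order
assert pool.alloc(64) is not None           # any freed slot fits any request
```

Real allocators keep many pools for different size classes and fall back to the general heap for large objects, which is where the hard-to-trigger edge cases live.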

CPU power increases, loads grow, workers multiply, the interleaving kicks in, and the edge case gets triggered.

And C/C++ makes fun of Fortran with its fixed-size data structures while embracing any new arbitrary-size, arbitrary-depth data structure, for the convenience of skipping a costly waterfall model before delivering a feature or a change in the data structure, and of avoiding bikeshedding in committees.

Humans want to work in a way that is more agile than what computers actually do under the hood.

Alternative:

Always allocate fixed-size memory ranges for data handling, and make sure they will be enough. When doing REST, make sure you have an upper bound and use paging/cursors, which require an FSM; then have all the FP programmers say mutable state is bad, the sysadmins say FSMs are a pain to handle when HA is required, the CFO say the SLA will not be reached and the business model is trashed, and the REST fans say that REST is dead once it is stateful.
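The "upper bound plus paging/cursor" idea might look like the following sketch (list_items, PAGE_MAX and the backing store are all made up for illustration): the server never handles more than PAGE_MAX items per request, and the cursor is exactly the state that turns the exchange into an FSM.

```python
# Cursor-based paging with a fixed upper bound on the amount of data
# any single request can touch.

PAGE_MAX = 2  # hypothetical fixed upper bound per request

ITEMS = ["a", "b", "c", "d", "e"]  # stand-in for the backing store

def list_items(cursor=0, limit=PAGE_MAX):
    limit = min(limit, PAGE_MAX)  # enforce the fixed upper bound
    page = ITEMS[cursor:cursor + limit]
    done = cursor + len(page) >= len(ITEMS)
    return {"items": page, "next": None if done else cursor + len(page)}

# The client walks the pages by carrying the cursor between requests.
resp = list_items()
while resp["next"] is not None:
    resp = list_items(cursor=resp["next"])
```

Either the client or the server has to carry that cursor state between requests, which is where the "REST is dead when stateful" complaint comes from.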

Well, REST is a bad idea.

discuss


hackits | 8 years ago

The good thing about fixed-size memory ranges is that they decrease debugging time. They also allow deterministic behaviour for your modules.