top | item 33195968


_vvhw|3 years ago

Static allocation has also made TigerBeetle's code cleaner, by eliminating branching at call sites where before a message might not always have been available. With static allocation, there's no branch because a message is always guaranteed to be available.

It's also made TigerBeetle's code more reliable, because tests can assert that limits are never exceeded. This has detected rare leaks that might otherwise have only been detected in production.


Twisol|3 years ago

I think the grandparent was saying that dynamic allocation is a form of optimization, which also makes the code harder to follow. Your anecdote seems exactly in line with their suggestion.

_vvhw|3 years ago

Ah, missed that, thanks! I've updated the comment.

pierrebai|3 years ago

I don't get your characterization. The statically sized arrays can never be fully used? That is doubtful. I suppose that if data X is only ever used 1:1 with data Y and you got a Y, then you are guaranteed to have an X, but that requires complete static analysis of the code base and certainty that no future code change will ever affect that 1:1 relationship. That seems fragile.

I mean, it's not like memory exhaustion is a common occurrence nowadays. This kind of compromise, where one trades an almost non-existent failure mode for a fragile, assumption-ridden code base, does not sound like a wise choice.

I mean, if the particular process is so tightly integrated that you can know you have a 1:1 relationship between X and Y, you can also tie those together with dynamic allocations. I find it hard to believe that you can easily show statically that X and Y allocations are tied under a static allocation scheme, but that the same would not hold under dynamic allocation.