alblue | 4 years ago
While Rust may have fewer issues than other languages in this regard, it doesn’t change the fact that destroying a “thing” can result in a transitive set of “other things” being destroyed as well, and depending on the state of the program, that set can be quite large.
Even in a “predictable” language like Rust, a data entry containing a map-like structure is going to have a different destruction time if the map is empty versus if it is full.
The other problem with claims of “predictable” delays is that they completely ignore runtime issues at the OS layer, such as swapping, page-cache invalidation, or migration between cores or NUMA regions, so it is rarely the case that any program in any language is going to have predictable behaviour. Even trivial things, like the number and size of environment variables used at program startup, can produce a performance delta.
Anyway, the point is that when programmers talk about “predictable” behaviour in a program, what they generally mean is “invisible performance problems” and so it gets swept under the carpet.
That’s not to say that GC doesn’t introduce variance into measurements. But it is not the only source of variance, and many good GCs can run on multiple cores and avoid interrupting the program without significant overhead, although tracing collectors will obviously increase pressure on the memory system in a way that explicit/automatic memory management does not.
Xylakant | 4 years ago
This is generally an interesting feature of garbage collected languages, and it's often more efficient than manual memory management - but it does come with downsides.
> Even in a “predictable” language like Rust, a data entry containing a map-like structure is going to have a different destruction time if the map is empty versus if it is full.
"Predictable" does not mean "always the same". It means that given a map with the same size and fill level, destroying it will take the same time. What's more important is that the point in the program where the destruction happens can be controlled - and, if necessary, moved out of the critical path of something that requires a response within a certain bound. This may introduce an overall inefficiency in the program, but if you have software that needs to meet certain time bounds, that's the tradeoff you choose. Doing so in a garbage collected language is much harder.
And Rust targets environments where (near-)realtime behaviour and predictable performance are important, or where garbage collection is not truly possible since heap allocation isn't even available (think embedded systems).
> Anyway, the point is that when programmers talk about “predictable” behaviour in a program, what they generally mean is “invisible performance problems” and so it gets swept under the carpet.
I don't see how this is a counterpoint to what I said - programmers blaming other things for performance problems has been a thing forever.