(no title)
d3ckard | 2 months ago
We used to have very little memory, so we developed many tricks to handle it.
Now we have all the memory we need, but the tricks remained. They are now more harmful than helpful.
Interestingly, embedded programming has a reputation for stability and AFAIK game development is also more and more about avoiding dynamic allocation.
mikepurvis | 2 months ago
Under these conditions, you do need a fair bit of dynamism, but the deallocations can generally be in big batches rather than piecemeal, so it's a good fit for slab-type systems.
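The batch-deallocation pattern described here can be sketched as a minimal bump/arena allocator: individual allocations are piecemeal, but everything is freed at once by resetting the arena (e.g. at the end of a frame or level). This is an illustrative sketch, not any particular engine's API; the names `Arena`, `alloc`, and `reset` are made up for the example.

```cpp
#include <cstddef>
#include <cstdlib>

// Minimal arena (bump) allocator: allocate piecemeal, free in one batch.
class Arena {
public:
    explicit Arena(std::size_t capacity)
        : buf_(static_cast<char*>(std::malloc(capacity))),
          cap_(capacity), used_(0) {}
    ~Arena() { std::free(buf_); }

    // Bump-pointer allocation with alignment; returns nullptr when full.
    void* alloc(std::size_t size,
                std::size_t align = alignof(std::max_align_t)) {
        std::size_t p = (used_ + align - 1) & ~(align - 1);
        if (p + size > cap_) return nullptr;
        used_ = p + size;
        return buf_ + p;
    }

    // The "big batch" deallocation: everything is freed at once.
    void reset() { used_ = 0; }

    std::size_t used() const { return used_; }

private:
    char*       buf_;
    std::size_t cap_;
    std::size_t used_;
};
```

A typical use is one arena per frame: gameplay code allocates freely during the frame, then the whole arena is reset before the next one, so no individual lifetimes need tracking. (C++17's `std::pmr::monotonic_buffer_resource` provides a standard-library version of the same idea.)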
array_key_first | 2 months ago
Also, it's easier to refactor if you follow the typical GC allocation patterns. Because you have a million different lifetimes and nobody actually knows them, except the GC, kind of, it doesn't matter if you dramatically move stuff around. That has pros and cons, I think. It makes it very unclear who is actually using what and why, but it does mean you can change code quickly.
badsectoracula | 2 months ago
That might have been the case ~30 years ago on platforms like the Gameboy (PC games were already starting to use C++ and higher level frameworks) but certainly not today. Pretty much all modern game engines allocate and deallocate stuff all the time. UE5's core design with its UObject system relies on allocations pretty much everywhere (and even in cases where you do not have to use it, the existing APIs still force allocations anyway) and of course Unity using C# as a gameplay language means you get allocations all over the place too.
yeasku | 2 months ago
Aka you minimize allocations in gameplay.