top | item 31962293

somerando7 | 3 years ago

> I was taught that to allocate memory was to summon death itself to ruin your performance. A single call to malloc() during any frame is likely to render your game unplayable. Any sort of allocations that needed to happen with any regularity required writing a custom, purpose-built allocator, usually either a fixed-size block allocator using a freelist, or a greedy allocator freed after the level ended.
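(For readers who haven't written one: the "fixed-size block allocator using a freelist" the quote mentions is a small amount of code. This is an illustrative sketch, not the author's actual allocator - one upfront allocation, free blocks chained through their own storage, O(1) alloc/free, no system calls after construction. `BlockPool` is a made-up name.)

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Sketch of a fixed-size block allocator: one upfront allocation,
// free blocks chained through their own bytes (the "freelist").
class BlockPool {
public:
    BlockPool(std::size_t block_size, std::size_t count)
        // Round the block size up so each block can hold a pointer
        // and stays pointer-aligned within the backing buffer.
        : block_size_(((block_size < sizeof(void*) ? sizeof(void*) : block_size)
                       + alignof(void*) - 1) & ~(alignof(void*) - 1)),
          storage_(block_size_ * count) {
        for (std::size_t i = 0; i < count; ++i) {   // thread blocks onto the list
            void* block = storage_.data() + i * block_size_;
            *static_cast<void**>(block) = head_;
            head_ = block;
        }
    }
    void* alloc() {                  // O(1), no malloc, no syscall
        if (!head_) return nullptr;  // pool exhausted: caller decides policy
        void* block = head_;
        head_ = *static_cast<void**>(block);
        return block;
    }
    void free(void* block) {         // O(1): push back onto the freelist
        *static_cast<void**>(block) = head_;
        head_ = block;
    }
private:
    std::size_t block_size_;
    std::vector<std::uint8_t> storage_;
    void* head_ = nullptr;
};
```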

Where do people get their opinions from? It seems like opinions now spread like memes - someone you respect, or who has done something in the world, says it, and you repeat it without verifying any of their points. It seems like gamedev has the biggest "C++ bad and we should all program in C" community out there.

If you want a good malloc impl just use tcmalloc or jemalloc and be done with it

Taniwha|3 years ago

I'm a sometimes real-time programmer (not games - sound, video, and cable/satellite crypto). malloc(), even on Linux, is anathema to real-time coding, because deep in the malloc libraries are mutexes that can cause priority inversion. If you want to avoid the sort of heisenbugs that occur once a week and cause weird sound burbles, you don't malloc on the fly - instead you pre-allocate from non-real-time code and run your own buffer lists.
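(The "pre-allocate and run your own buffer lists" pattern usually looks something like this hypothetical sketch: buffers are malloc'd once by the setup thread, and the real-time thread only passes indices through a wait-free single-producer/single-consumer ring, so it never touches malloc or a mutex. `SpscIndexRing` is an illustrative name, not anyone's actual code.)

```cpp
#include <array>
#include <atomic>
#include <cstddef>

// Wait-free SPSC ring of buffer indices. The non-real-time side
// push()es indices of buffers it has already allocated; the
// real-time side pop()s them. No locks, no allocation on either call.
template <std::size_t N>
class SpscIndexRing {
public:
    bool push(std::size_t idx) {                       // producer thread only
        auto head = head_.load(std::memory_order_relaxed);
        auto next = (head + 1) % (N + 1);
        if (next == tail_.load(std::memory_order_acquire)) return false; // full
        slots_[head] = idx;
        head_.store(next, std::memory_order_release);
        return true;
    }
    bool pop(std::size_t& idx) {                       // consumer thread only
        auto tail = tail_.load(std::memory_order_relaxed);
        if (tail == head_.load(std::memory_order_acquire)) return false; // empty
        idx = slots_[tail];
        tail_.store((tail + 1) % (N + 1), std::memory_order_release);
        return true;
    }
private:
    std::array<std::size_t, N + 1> slots_{};           // one slot kept empty
    std::atomic<std::size_t> head_{0}, tail_{0};
};
```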

astrange|3 years ago

Mutexes shouldn't be able to cause priority inversion - there's enough information available to resolve the inversion unless the scheduler doesn't care to, i.e. you know the priority of every thread waiting on the mutex. I guess I don't know how the Linux scheduler works, though.
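(On POSIX systems the scheduler only resolves the inversion if the mutex opts in: a mutex created with the priority-inheritance protocol lets the kernel boost a low-priority holder while a high-priority thread waits. The default protocol, which general-purpose mallocs use, gets no such boost. A minimal sketch, assuming Linux/glibc; `make_pi_mutex` is a made-up helper name:)

```cpp
#include <pthread.h>

// Create a mutex with PTHREAD_PRIO_INHERIT so a low-priority thread
// holding it is temporarily boosted to the priority of the highest
// waiter - the opt-in that prevents the inversion described above.
int make_pi_mutex(pthread_mutex_t* m) {
    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
    int rc = pthread_mutex_init(m, &attr);
    pthread_mutexattr_destroy(&attr);
    return rc;  // 0 on success
}
```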

But it's not safe to do anything with unbounded time on a realtime thread, and malloc takes unbounded time. You should also mlock() any large pieces of memory you're using, or at least touch them first, to avoid swap-ins.
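(The mlock-or-touch advice in concrete form - a hedged POSIX sketch, with `make_rt_buffer` as an illustrative name: allocate and fault in every page before handing the buffer to the real-time thread, so no first-touch page fault or swap-in can land mid-callback.)

```cpp
#include <cstdlib>
#include <cstring>
#include <sys/mman.h>

// Allocate a buffer for later use on a real-time thread:
// touch every page now (memset), then try to pin it (mlock).
char* make_rt_buffer(std::size_t size) {
    char* buf = static_cast<char*>(std::malloc(size));
    if (!buf) return nullptr;
    std::memset(buf, 0, size);   // fault in every page up front
    if (mlock(buf, size) != 0) {
        // May fail under RLIMIT_MEMLOCK without privileges; not fatal
        // for the sketch - the pages are at least resident now.
    }
    return buf;
}
```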

morelisp|3 years ago

Aside from the performance implications being very real (even today, the best first step to micro-optimize is usually to kill/merge/right-size as many allocations as possible), up through ~2015 the dominant consoles still had very little memory and no easy way to compact it. Every single non-deterministic malloc was a small step towards death by fragmentation. (And every deterministic malloc would see major performance gains with no usability loss if converted to e.g. a per-frame bump allocator, so in practice any malloc you were doing was non-deterministic.)
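(A per-frame bump allocator of the kind mentioned above is a few lines - this is a generic sketch, not any particular engine's: allocation is an aligned pointer increment, and the whole frame's memory is released at once by a reset. `FrameArena` is an illustrative name.)

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Per-frame bump (arena) allocator: alloc() is a pointer bump,
// reset() at end of frame is an O(1) "free everything".
class FrameArena {
public:
    explicit FrameArena(std::size_t capacity) : storage_(capacity) {}
    // `align` must be a power of two for the mask trick below.
    void* alloc(std::size_t size, std::size_t align = alignof(std::max_align_t)) {
        std::size_t p = (offset_ + align - 1) & ~(align - 1);
        if (p + size > storage_.size()) return nullptr;  // frame budget blown
        offset_ = p + size;
        return storage_.data() + p;
    }
    void reset() { offset_ = 0; }  // call once per frame
private:
    std::vector<std::uint8_t> storage_;
    std::size_t offset_ = 0;
};
```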

charles_kaw|3 years ago

If this person was taught game dev any time before about 2005, that would have still been relevant knowledge. Doing a large malloc or causing paging could have slaughtered game execution, especially during streaming.

>If you want a good malloc impl just use tcmalloc or jemalloc and be done with it

This wasn't applicable until relatively recently.

jcelerier|3 years ago

> Doing a large malloc or causing paging could have slaughtered game execution, especially during streaming.

...it still does? I had a case a year or so ago (on then-latest Linux / GCC / etc.) where a very sporadic allocation of 40-something bytes (precisely: inserting a couple of int64s into an unordered_map at the wrong time) in a real-time thread was enough to go from "ok" to "unusable".
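(One way to keep exactly those node allocations off the global heap, assuming C++17 is available: back the map with a `std::pmr::monotonic_buffer_resource` over a preallocated buffer, so inserts bump-allocate from the buffer instead of calling malloc mid-frame. `demo` and the buffer size are illustrative, not from the case described above.)

```cpp
#include <cstddef>
#include <cstdint>
#include <memory_resource>
#include <unordered_map>

std::int64_t demo() {
    static std::byte buf[64 * 1024];
    // null_memory_resource as upstream: exhausting buf throws instead
    // of silently falling back to malloc - deliberate for RT code.
    std::pmr::monotonic_buffer_resource arena(
        buf, sizeof buf, std::pmr::null_memory_resource());
    std::pmr::unordered_map<std::int64_t, std::int64_t> map(&arena);
    map.reserve(16);   // the bucket array comes from the arena too
    map[40] = 2;       // node allocation is a pointer bump, no malloc
    return map.at(40);
}
```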

syntheweave|3 years ago

If you go way back into the archives of the blog's author - probably about ten years now - you will find another memory-related rant on how multisampled VST instrument plugins should be simple and "just" need mmap.

I did, in fact, call him out on that. I did not know exactly how those plugins worked then (though I have a much better idea now), but I already knew that it couldn't be so easy. The actual VST devs I shared it with concurred.

But it looks like he's simply learned more ways of blaming his tools since then.

TonyTrapp|3 years ago

As always, there is some truth to it - the problem with the MSVCRT malloc described in this blog article is living proof of that - but these days it's definitely not a rule that holds in 100% of cases. Modern allocators are really fast.