rbehrends|8 years ago
This is because naive, can-do-it-all allocations in C/C++ can be expensive, not because allocations are inherently expensive. In C/C++, you have:
1. A call to a library function that typically cannot be inlined.
2. Analysis of the object size to pick the right pool (or a more general allocator) to allocate from.
3. A traditional malloc() implementation also needs to take a global lock; thread-local allocators are comparatively rare.
4. For large objects, a first-fit/best-fit search with potentially high cost has to be used.
Modern GCs typically use a bump allocator, which is an arena allocator in all but name. In OCaml or on the JVM, an allocation is a pointer increment and comparison.
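To make the "pointer increment and comparison" concrete, here is a minimal bump-allocator sketch in C; the names and layout are illustrative, not taken from any particular runtime. The fast path is one alignment round-up, one limit check, and one pointer bump:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Minimal bump allocator: the fast path a GC nursery allocation
   compiles down to. One pointer increment plus one limit check. */
typedef struct {
    uint8_t *next;   /* current allocation pointer */
    uint8_t *limit;  /* end of the arena */
} Arena;

static void *arena_alloc(Arena *a, size_t size) {
    size = (size + 7) & ~(size_t)7;        /* round up to 8-byte alignment */
    if ((size_t)(a->limit - a->next) < size)
        return NULL;                       /* slow path: GC / grab a new chunk */
    void *p = a->next;
    a->next += size;                       /* the "bump" */
    return p;
}
```

Because the whole fast path is a handful of instructions, compilers can inline it at every allocation site, which is exactly what malloc's out-of-line call cannot get.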
Even without bump allocators, it's easy for a GC implementation to automatically turn most allocations into pool allocations that can be inlined.
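A pool allocation of the kind described is also just a few instructions. This is a hypothetical fixed-size free-list pool (node size and refill policy invented for the sketch): allocation pops a node, freeing pushes it back, and both are trivially inlinable:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical fixed-size pool: one free list per size class.
   Alloc and free are each a couple of pointer moves. */
typedef union Node {
    union Node *next;        /* link when the node is on the free list */
    unsigned char data[32];  /* payload when the node is in use */
} Node;

typedef struct { Node *free; } Pool;

static void *pool_alloc(Pool *p) {
    Node *n = p->free;
    if (!n) return NULL;     /* real code would refill from a larger chunk */
    p->free = n->next;       /* pop off the free list */
    return n;
}

static void pool_free(Pool *p, void *ptr) {
    Node *n = ptr;
    n->next = p->free;       /* push back onto the free list */
    p->free = n;
}
```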
Also: much as people love to talk about video games, video games with such strict performance requirements are only a part of the video game industry, and a tiny part of the software industry.
iainmerrick|8 years ago
> In OCaml or on the JVM, an allocation is a pointer increment and comparison.
That's true, but if (hopefully rarely) the object turns out to be needed later, it has to be copied into another heap, and that takes time and memory. Pointers to it need to be redirected, and that takes a little work too.
Bump allocators are definitely a huge win, as good as anything you can do in C/C++ and much more convenient for the programmer, but they're not a completely free lunch.
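The survivor-copy cost mentioned above can be sketched as well. This is a toy version of the evacuation step a copying generational collector performs, with an invented `Obj` layout: the object is copied into the old space and a forwarding pointer is left behind in the nursery so later references get redirected to the copy:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Toy evacuation step of a copying generational GC.
   The Obj layout here is hypothetical. */
typedef struct Obj {
    struct Obj *forward;  /* NULL until the object has been evacuated */
    long payload;
} Obj;

static Obj *evacuate(Obj *obj, Obj *old_space, size_t *old_top) {
    if (obj->forward)                 /* already copied: just redirect */
        return obj->forward;
    Obj *copy = &old_space[(*old_top)++];
    memcpy(copy, obj, sizeof *copy);  /* this copy costs time and memory */
    copy->forward = NULL;
    obj->forward = copy;              /* leave forwarding pointer in the nursery */
    return copy;
}
```

Every surviving object pays the `memcpy`, and every reference to it pays the redirect, which is the "not a completely free lunch" part.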