The negative performance impact of GC in performance-engineered code is neither small nor controversial; it is a mechanical consequence of the architectural choices available. Explicit control over locality and scheduling makes a big difference on modern silicon. Especially for software that is expressly engineered for maximum performance, a GC implementation won't come particularly close to a non-GC one. Some important optimization techniques in GC languages are about effectively disabling the GC.
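A minimal sketch of one such "effectively disable the GC" technique is object pooling: allocate everything up front and reuse instances, so the steady state produces no garbage for the collector to chase. The `Pool` and `Particle` names here are illustrative, not from any particular codebase.

```java
import java.util.ArrayDeque;

// Preallocate a fixed pool of objects and reuse them, so the steady state
// performs no allocation and hands the GC nothing new to collect.
class Pool {
    static final class Particle {
        double x, y, vx, vy;
        void reset() { x = y = vx = vy = 0.0; }
    }

    private final ArrayDeque<Particle> free = new ArrayDeque<>();

    Pool(int capacity) {
        // All allocation happens up front, once.
        for (int i = 0; i < capacity; i++) free.push(new Particle());
    }

    Particle acquire() {
        Particle p = free.poll();
        return p != null ? p : new Particle(); // fallback only if exhausted
    }

    void release(Particle p) {
        p.reset();    // scrub state before reuse
        free.push(p); // back into the pool; never becomes garbage
    }

    public static void main(String[] args) {
        Pool pool = new Pool(1024);
        Particle p = pool.acquire();
        p.x = 3.0;
        pool.release(p);
        // The same instance comes back: no new allocation in steady state.
        System.out.println(pool.acquire() == p); // prints "true"
    }
}
```

The cost is that lifetime management becomes manual again, which is part of the point being made above: you pay for GC either in throughput or by engineering around it.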
jandrewrogers|3 years ago
While some applications written in C++ are not performance sensitive, performance tends to be a major objective when choosing C++ for many applications.
When people complain about the "negative performance impact of GC", often what actually bothers them is badly designed languages like Java that force heap allocation of almost everything.
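The forced-heap-allocation point can be sketched concretely: a `List<Integer>` stores a separate heap object per element (object header plus pointer indirection), while an `int[]` is one contiguous allocation with no per-element GC cost. A hedged illustration, with sizes chosen arbitrarily:

```java
import java.util.ArrayList;
import java.util.List;

// Boxed vs. primitive storage: same arithmetic, very different memory shape.
class BoxingDemo {
    static long sumBoxed(List<Integer> xs) {
        long s = 0;
        for (Integer x : xs) s += x; // unboxes (pointer chase) on every access
        return s;
    }

    static long sumPrimitive(int[] xs) {
        long s = 0;
        for (int x : xs) s += x;     // straight-line loads, cache friendly
        return s;
    }

    public static void main(String[] args) {
        int n = 1_000_000;
        int[] prim = new int[n];                  // one allocation total
        List<Integer> boxed = new ArrayList<>(n); // roughly one Integer object per element
        for (int i = 0; i < n; i++) {
            prim[i] = i;
            boxed.add(i);
        }
        System.out.println(sumBoxed(boxed) == sumPrimitive(prim)); // prints "true"
    }
}
```

The results are identical; the difference is that the boxed version scatters a million small objects across the heap for the GC to trace.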
nsajko|3 years ago
I think this might have been fixed in the latest versions of Java, though; I'm not sure whether value types are already in the language or just coming soon.
Aside from that, it's my understanding that GC can be both a blessing and a curse for performance (throughput); that is, an advanced-enough GC implementation should (theoretically?) be faster than manual memory management.
phao|3 years ago
My comment is about the thinking behind making this decision, C++ or not. It wasn't "is it speculative that GC will add a cost?" or something like that.
I wonder how much of the thinking that leads one to conclude "I need so much performance here that I can't afford a managed language", for example, is real careful thought vs. speculation.
In my experience, it's 99% speculative and WRONG.
Who said "early optimization is the root of all evil"? :)
Today it is more and more possible to have a GC without terrible performance issues. Some weeks ago I read an article here on HN about LISP being used for safety-critical systems. The bad reputation of GC comes from the early versions of Java... but I've been using GC languages a LOT, and I've never had those "stop the world" moments.
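For what it's worth, modern JVMs ship low-pause collectors that can be enabled with a single flag; ZGC (production-ready since JDK 15) and Shenandoah both target pause times in the low milliseconds. A sketch, where `app.jar` and the heap size are placeholders:

```shell
# Enable the low-pause ZGC collector on a recent JDK:
java -XX:+UseZGC -Xmx4g -jar app.jar

# Or keep G1 (the default since JDK 9) and set a pause-time goal instead:
java -XX:MaxGCPauseMillis=50 -Xmx4g -jar app.jar
```

Neither flag is free: the shorter pauses generally trade away some throughput, which is the blessing-and-curse tension mentioned upthread.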
The expression is "premature optimization". And, Donald Knuth.
ncmncm|3 years ago
GC overhead is always hard to measure except end-to-end, because it is distributed over everything else that happens: cache misses, TLB shoot-downs. Mitigations are very difficult to place.
Practically, you usually just have to settle for lower performance, and most people do. Double or triple your core count and memory allocation, and bull ahead.
Not my bailiwick, but I feel like early Java's problem was a combination of two things: everything that's not a simple primitive is an object that goes on the heap, plus a GC optimized for batch throughput rather than latency. Bonus: it's Java all the way down, to the bitter end.
Gibbon1|3 years ago
I'm with you. One should look at latency requirements and the ratio of profit to server costs when making a decision. AKA: when your product generates $250k/month, you're paying three programmers $40k/month, and your AWS bill is $500/month, that isn't the time to try to shave pennies.