
Comparing Java GC Collectors

42 points | ingve | 10 years ago | eivindw.github.io

18 comments

[+] hyperpape | 10 years ago
I know that profiling using VisualVM (or most other profilers) can substantially change the garbage collection behavior of an application (http://psy-lob-saw.blogspot.com/2014/12/the-escape-of-arrayl...).

I don't know if using VisualVM in the way this article does has the same effect--sounds like the author is not doing any actual profiling, so it may not be an issue. Additionally, since this is a synthetic benchmark, you might argue that it's not important. Still, I'd tend to think it makes more sense to use OS level tools to monitor the CPU usage and avoid changing the behavior of the JVM.

[+] dpratt | 10 years ago
I concur. These results would have been better if they'd been generated using the native GC logging format and then run through an analysis tool.

Similarly, the article mentions that the synthetic benchmark was run for only a minute. I'm very suspicious of this: any results you get will be close to useless until HotSpot has had some time to optimize the running code. I strongly suspect that G1 throughput would have been significantly higher had the benchmark been allowed to run longer.
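As a concrete sketch of the native GC logging dpratt describes (the flag names below are the HotSpot/Java 8 ones; `app.jar` is a placeholder for your application):

```shell
# Java 8 HotSpot GC logging flags (Java 9+ replaced these with -Xlog:gc*).
# app.jar is a placeholder for your own application.
java -Xloggc:gc.log \
     -XX:+PrintGCDetails \
     -XX:+PrintGCDateStamps \
     -XX:+PrintGCApplicationStoppedTime \
     -jar app.jar
```

The resulting gc.log can then be fed to an analysis tool such as GCViewer.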

[+] kilink | 10 years ago
There are many other factors that ultimately determine GC performance, and the collector must be tuned for the workload. For instance, the out-of-the-box default for NewRatio [1] is not ideal for web services with lots of short-lived objects.

[1] https://blogs.oracle.com/jonthecollector/entry/the_second_mo...
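A sketch of setting the ratio kilink mentions (flag spelling is HotSpot's; `app.jar` and the choice of NewRatio=1 are illustrative assumptions, not recommendations):

```shell
# -XX:NewRatio=N sets the old:young size ratio. The common server default
# of 2 gives the young generation one third of the heap; NewRatio=1 gives
# it half, which can suit services that churn through short-lived objects.
java -XX:+UseParallelGC -XX:NewRatio=1 -jar app.jar
```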

[+] eivindw | 10 years ago
Yes, the choice of collector is just one of many tuning options. The point is that you need to log and measure your own app for every change you make.
[+] eivindw | 10 years ago
The main point of the article was really to show how to use GC logging correctly: calculating max and average pause times plus throughput. The VisualVM stuff was just added for fun; I ran without it and the numbers are the same. Running for only a minute was just to get images to compare.

In a real application you just need GC logging and a tool to calculate the metrics. The point is that you have to measure; my numbers are only valid for a test app that ran for one minute.
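A minimal sketch of that kind of metric calculation, assuming the classic Java 8 ParallelGC log format (the sample log lines below are fabricated for illustration; real logs vary by collector and flags):

```shell
# Fabricated sample lines in the Java 8 ParallelGC log format.
cat > gc-sample.log <<'EOF'
0.521: [GC (Allocation Failure) [PSYoungGen: 65536K->10720K(76288K)] 65536K->10736K(251392K), 0.0123456 secs]
1.842: [GC (Allocation Failure) [PSYoungGen: 76256K->10748K(76288K)] 76272K->18900K(251392K), 0.0234567 secs]
3.105: [Full GC (Ergonomics) [PSYoungGen: 10748K->0K(76288K)] [ParOldGen: 8152K->17000K(175104K)] 18900K->17000K(251392K), 0.1456789 secs]
EOF

# Each event ends in ", <pause> secs]": take the last comma-separated field,
# read the number, and report the count, max and average pause.
awk -F', ' '{ split($NF, a, " "); p = a[1] + 0; n++; s += p; if (p > max) max = p }
            END { printf "events=%d max=%.4f avg=%.4f\n", n, max, s / n }' gc-sample.log
# -> events=3 max=0.1457 avg=0.0605
```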

[+] joncrocks | 10 years ago
Article is trash.

There are different collectors. They are different.

No really?

[+] nomadlogic | 10 years ago
Also, I was unable to find a reference to the version/build of Java the author was using, which undermines any numbers/benchmarks presented.
[+] Scarbutt | 10 years ago
What is the default GC for the 1.8 JVM?
[+] iheartmemcache | 10 years ago
It depends on, like, a billion different things. Odds are, if you haven't fed any arguments to your VM, it'll be concurrent mark-and-sweep. It depends on a ton of things: RAM availability, the percentages for ideal G0->G1 and G1->old promotion (and whether you've even defined your 'ideal'; otherwise it defaults to around 20%), the predictability of your heap access and/or growth, whether and how you're using/reusing object pools, and I'm sure a lot of other things. That's exactly why dpratt and I called foul ball on this 'analysis' or whatever (because the JVM was _designed_ to warm up and tune itself).

Rule of thumb: G1 is great for throughput if you have a ton of extra RAM and want that extra oomph (i.e., Bank of America is probably running WebSphere in a straight G1 configuration). On your dev machine, unless you fed some weird params to the VM, you're almost certainly running parallel or concurrent mark-and-sweep. Unless you're running Java on a mobile device you're 99.99% not running incremental, and the only time the JVM would defer from concurrent mark-and-sweep to incremental is if you had a lot of RAM and only one processor available (I don't think I've ever seen that mode in my life).
[+] hyperpape | 10 years ago
Parallel. G1 will be default in 1.9.

Edit: the part about 1.8 is wrong. 1.8 doesn't have a standard default.