Do you have any plans to better distinguish between noise and regressions? I run a similar performance testing infrastructure for Chakra, and found that comparing against the previous run makes the results noisy. That means more manual review of results, which gets old fast.
What I do now is run a script that averages results from the preceding 10 runs and compares that to the average of the following 5 runs to see if the regression is consistent or anomalous. If the regression is consistent, then the script automatically files a bug in our tracker.
There is still some noise in the results, but it cuts down on those one-off issues.
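(Not from the original comment, but for concreteness, a rough sketch of that rolling-window check might look like the following. The real script isn't shown here, so the function names and the 5% threshold are made up.)

```rust
// Compare the mean of the 10 runs before a suspect result with the mean of the
// 5 runs after it; only call it a regression when the slowdown persists.
fn mean(xs: &[f64]) -> f64 {
    xs.iter().sum::<f64>() / xs.len() as f64
}

/// `timings` is ordered oldest-to-newest; `idx` points at the suspect run.
fn regression_is_consistent(timings: &[f64], idx: usize) -> bool {
    if idx < 10 || idx + 5 > timings.len() {
        return false; // not enough history on either side yet
    }
    let before = mean(&timings[idx - 10..idx]);
    let after = mean(&timings[idx..idx + 5]);
    // Hypothetical threshold: file a bug only if the following runs stay >5% slower.
    after > before * 1.05
}
```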
I talked about this a little bit in the meetup talk I linked and I intend to write a bit more about this, but I'll try to summarize.
There are kind of three prongs here:
First, using criterion.rs does a ton for giving us more stable metrics. It handles things like warmups, accounting for obvious statistical outliers in the sample runs, postprocessing the raw data to provide more meaningful statistics, etc. I'm currently using a fork of the library which additionally does this recording and processing of a variety of metrics we get from `perf_event_open` on Linux but which I assume you could get through ETW or Intel/AMD's userspace PMC libraries.
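For anyone who hasn't used criterion.rs, a minimal benchmark looks roughly like this. This is a sketch against the upstream crate's documented interface; the fork and the extra perf-counter recording aren't shown, and the fibonacci workload is made up for illustration.

```rust
use criterion::{criterion_group, criterion_main, Criterion};

fn fibonacci(n: u64) -> u64 {
    match n {
        0 | 1 => 1,
        _ => fibonacci(n - 1) + fibonacci(n - 2),
    }
}

fn bench_fib(c: &mut Criterion) {
    // criterion handles warmup, sampling, and outlier classification for us;
    // we only describe the workload being measured.
    c.bench_function("fib 20", |b| b.iter(|| fibonacci(criterion::black_box(20))));
}

criterion_group!(benches, bench_fib);
criterion_main!(benches);
```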
Second, I try to provide a stable environment so that results over long time deltas are comparable and we can store the data for offline analysis, rather than having to check out recent prior commits and compare the current PR/nightly/etc against them. Prior to the current deployment I was using cgroups to move potentially competing processes off of the benchmark cores, which produced some nice results. However, I had some issues with the version of the cpuset utility I installed on the Debian machines and I haven't sorted it out yet.
Third, we do a few things with the time-series-esque data we get from measuring multiple toolchains to try to surface only relevant results. Those are mostly in src/analysis.rs if you want to poke around. It basically boils down to using a Kernel Density Estimate to judge whether the current toolchain's value comes from the same population (I hope these terms are halfway correct) as all prior toolchains' values.
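To make that idea concrete, a toy version of the check could look like this. It is not the code in src/analysis.rs; the Gaussian kernel, fixed bandwidth, and threshold are all stand-ins for whatever the real analysis does.

```rust
// Estimate the density of the historical values at point `x` with a Gaussian kernel.
fn gaussian_kde(prior_values: &[f64], bandwidth: f64, x: f64) -> f64 {
    let n = prior_values.len() as f64;
    prior_values
        .iter()
        .map(|&v| {
            let z = (x - v) / bandwidth;
            (-0.5 * z * z).exp() / (bandwidth * (2.0 * std::f64::consts::PI).sqrt())
        })
        .sum::<f64>()
        / n
}

// Flag the current toolchain's value when it looks implausible under the
// distribution of all prior toolchains' values (threshold is hypothetical).
fn looks_anomalous(prior_values: &[f64], current: f64, threshold: f64) -> bool {
    gaussian_kde(prior_values, 1.0, current) < threshold
}
```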
I hope that with a few extensions to the above we can get close to being reliable enough to include in early PR feedback, but I think the likely best case scenario is a manually invoked bot on PRs followed by me and a few other people triaging the regressions surfaced by the tool after something merges.
Here are a few issues that I think will help improve this too:
https://github.com/anp/lolbench/issues/20
https://github.com/anp/lolbench/issues/17
https://github.com/anp/lolbench/issues/14
Do you mean 10 preceding versions, or 10 repeated timings of the same version? If you repeat the timing for each version many times, why is that not enough to smooth out the noise?
For those wanting to do similar tracking of benchmarks across commits, I've found Airspeed Velocity to be quite nice ( https://readthedocs.org/projects/asv ). It allows (but doesn't require) benchmarks to be kept separate from the project's repo, can track different configurations separately (e.g. using alternative compilers, dependencies, flags, etc.), keeps results from different machines separated, generates JSON data and HTML reports, performs step detection to find regressions, etc.
It was intended for use with Python (virtualenv or anaconda), but I created a plugin ( http://chriswarbo.net/projects/nixos/asv_benchmarking.html ) which allows using Nix instead, so we can provide any commands/tools/build-products we like in the benchmarking environment (so far I've used it successfully with projects written in Racket and Haskell).
How do you determine the baseline load of the test machine in order to qualify the correctness of the benchmark?
Assuming the compiling and testing are done in the cloud, how do you ensure the target platform (processor) doesn't change, and that you aren't subjected to neighbors stealing RAM bandwidth or CPU cache resources from your VM and impacting the results?
Each benchmark result is only compared against values from running on literally the same machine, actually. I agree that good results here would be extremely difficult to produce on virtualized infra, so I rented a few cheap dedicated servers from Hetzner. I'm glad that I decided to pin results to a single machine, because even between these identically binned machines from Hetzner I saw 2-4% variance when I ran some Phoronix benches to compare.
I go into a little bit of detail on this in the talk I link to towards the bottom of the post; here's a direct link for convenience: https://www.youtube.com/watch?v=gSFTbJKScU0.
> Common excuses people give when they regress performance are, “But the new way is cleaner!” or “The new way is more correct.” We don’t care. No performance regressions are allowed, regardless of the reason. There is no justification for regressing performance. None.
This seems a bit extreme. Would they accept a regression to fix a critical security vulnerability? Code can be infinitely fast if it need not be correct.
> How long do we expect it to take before "automagically" completely replaces "automatically" in English? I am guessing less than a decade to go now.

I use this word the way we did when I worked as a PC technician and help desker, where there's a lot of automation but then we sneak a bit of manual labor in to make it actually useful. Like how user accounts would be maintained in the correct state automagically.
> Do you track opt_level=2 (the Firefox Rust opt level) in addition to the default opt_level=3?

Not yet, but I am tracking this as a desired feature: https://github.com/anp/lolbench/issues/9. The benchmark plan generation, storage keys, and results presentation will at a minimum need to be extended to support a matrix of inputs to each benchmark function. Right now there are a number of implicit assumptions that each benchmark function is tracked as a single series of results.
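To illustrate what extending the storage keys could mean, here's a hypothetical shape for a key that supports a matrix of configurations; none of these field names come from lolbench itself, they're just meant to show why the "one function = one series" assumption has to change.

```rust
// Hypothetical composite key: each unique combination becomes its own result series.
#[derive(Clone, Debug, PartialEq, Eq, Hash)]
struct BenchKey {
    benchmark_fn: String,  // e.g. a fully qualified benchmark function name
    toolchain: String,     // e.g. "nightly-2018-09-01"
    opt_level: u8,         // 2 for a Firefox-style build, 3 for the default
    input: Option<String>, // a member of the input matrix, if any
}
```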
Can I suggest you consider putting https://github.com/anp/lolbench/issues/1 into the README.md file, so people can easily see where to look for some TODO items?
Surely JDK devs have a corpus of projects they test releases against, but JVM devs tend not to run the kind of microbenchmarks this would need very often. So in general a corpus of centralized JMH benchmarks would probably have more value than referencing them from other projects. I'm sure an entity could offer this service if, e.g., projects with JMH benchmarks invoked in a common form from Maven Central or GitHub or whatever were submitted to a central curator, but I'm not sure who would want to curate that.