EdwardCoffin | 6 months ago

It's unmentioned in the article, but Trevor Blackwell's PhD thesis, Applications of Randomness in System Performance Measurement [1] was advocating this in 1998:

This thesis presents and analyzes a simple principle for building systems: that there should be a random component in all arbitrary decisions. If no randomness is used, system performance can vary widely and unpredictably due to small changes in the system workload or configuration. This makes measurements hard to reproduce and less meaningful as predictors of performance that could be expected in similar situations.

[1] https://tlb.org/docs/thesis.pdf

hinkley | 6 months ago

All else being equal, I like to have either a prime number of servers or a prime number of in-flight requests per server. I’m always slightly afraid someone will send a batch of requests, or tune a benchmark to run a number of times that divides evenly into the system’s parallelism, and then we won’t be testing what we think we’re testing, due to accidental locality of reference that doesn’t show up in the general population. It’s not unlike the uneven gear wear you get when you mesh two gears whose tooth counts share a large common factor, as in a 3:1 or 2:3 ratio, so the same teeth keep meeting all the time.
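The gear analogy is easy to check with a little arithmetic: the number of distinct partner teeth any one tooth ever touches is the other gear's tooth count divided by the gcd of the two counts, so coprime (e.g. prime) counts spread contact over every tooth. A minimal sketch (the helper name and tooth counts are illustrative, not from the comment):

```python
from math import gcd

def partner_teeth(a_teeth, b_teeth, tooth=0):
    """Set of teeth on gear B that tooth `tooth` of gear A ever meshes with."""
    met = set()
    pos = tooth % b_teeth
    step = a_teeth % b_teeth  # how far the contact point advances per full turn of A
    while pos not in met:
        met.add(pos)
        pos = (pos + step) % b_teeth
    return met

# 3:1 ratio (30 and 10 teeth, gcd 10): each tooth meets only one partner
print(len(partner_teeth(30, 10)))
# coprime counts (31 and 10 teeth, gcd 1): each tooth meets every partner
print(len(partner_teeth(31, 10)))
# in general the count is b_teeth // gcd(a_teeth, b_teeth)
assert len(partner_teeth(30, 10)) == 10 // gcd(30, 10)
```

The same divisibility argument is what makes a batch size that divides the system parallelism land on the same servers over and over.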

But all else is seldom equal, and Random 2 works as well or better.
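Assuming "Random 2" refers to the power-of-two-random-choices load-balancing policy (pick two servers at random, send the request to the less loaded one), its effect on worst-case queue depth shows up even in a toy simulation; the server and request counts below are arbitrary:

```python
import random

def max_load(n_servers, n_requests, choices, seed=1):
    """Max per-server load after routing each request to the least
    loaded of `choices` uniformly random candidate servers."""
    random.seed(seed)
    loads = [0] * n_servers
    for _ in range(n_requests):
        picks = [random.randrange(n_servers) for _ in range(choices)]
        best = min(picks, key=lambda s: loads[s])
        loads[best] += 1
    return max(loads)

# choices=1 is plain random assignment; choices=2 is "Random 2"
one = max_load(100, 10_000, choices=1)
two = max_load(100, 10_000, choices=2)
print(one, two)
```

With a second random choice the maximum load stays much closer to the mean (here, 100 requests per server) than with a single uniform pick.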