manjunaths | 10 years ago
One of the biggest reasons for using HPL is that many sizing considerations can be based on theoretical calculations.
But anyway this is very interesting. I definitely need to check this out.
jedbrown | 10 years ago
HPGMG is representative of most structure-exploiting algorithms in that it does not have this abundance of flops, thus theoretical performance is actively constrained by both memory bandwidth and flop/s. We see many active constraints in practice; e.g., improving any of peak flop/s, memory bandwidth, network latency, or network bandwidth produces a tangible improvement in HPGMG performance. Depending on the fidelity of the performance model, these dimensions can be a fairly accurate predictor of performance, but ILP, compiler quality, on-node synchronization latency, cache sizes, and similar factors also matter (more for HPGMG-FE than HPGMG-FV).
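The compute-vs-bandwidth tradeoff described above is commonly captured by the roofline model: attainable flop/s is the minimum of the machine's compute peak and its memory bandwidth times the kernel's arithmetic intensity. A minimal sketch, with entirely hypothetical machine numbers (the intensity values are illustrative, not measured HPL or HPGMG figures):

```python
def roofline_flops(peak_flops, mem_bandwidth, arithmetic_intensity):
    """Attainable flop/s under the roofline model.

    peak_flops: machine compute peak (flop/s)
    mem_bandwidth: memory bandwidth (bytes/s)
    arithmetic_intensity: kernel flops per byte moved
    """
    return min(peak_flops, mem_bandwidth * arithmetic_intensity)

# Hypothetical machine: 1 Tflop/s peak, 100 GB/s memory bandwidth.
peak = 1e12
bw = 100e9

# High-intensity, HPL-like kernel (dense matrix-matrix): compute-bound,
# so it hits the full 1 Tflop/s peak.
dense = roofline_flops(peak, bw, 50.0)    # -> 1e12 flop/s

# Low-intensity, stencil/multigrid-like kernel: bandwidth-bound,
# so it achieves only a small fraction of peak.
stencil = roofline_flops(peak, bw, 0.25)  # -> 2.5e10 flop/s
```

This is why a dense benchmark's score can be predicted from peak flop/s alone, while a low-intensity workload's performance moves with memory bandwidth (and, beyond this simple model, with latency and other factors).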
I think it is actually quite undesirable for benchmark performance to be trivially computed from one parameter in machine provisioning. No computing center has a mission statement asking for a place on a benchmark ranking list (like Top500). Instead, they have a scientific or engineering mandate. Press releases tend to overemphasize the ranking and I think it is harmful to the science any time the benchmark takes precedence over the expected scientific workload. HPGMG is intended to be representative in the sense that if you build an "HPGMG Machine", you'll get a balanced, versatile machine that scientists and engineers in most disciplines will be happy with. I'd still rather the centers focus on their workload instead of HPGMG.