If you read the lockdownskeptics site, "hard to debug" is not the problem. Non-determinism in the output is the issue, and if that is indeed the case, why would anyone trust the results? "Do a bunch of runs and average" is not a good answer.
It's really not enough to just say that: "do a bunch of runs and average" is exactly how quite a bit of simulation software works. In this case, a small number of random events early in the "pandemic" will have a large impact on the final result.
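To make the "do a bunch of runs and average" point concrete, here is a toy, seeded branching-process sketch. The parameters and structure are entirely made up for illustration and have nothing to do with the actual Imperial code; the point is only that a single stochastic run is dominated by early luck, while the average over many seeds is the stable quantity:

```python
import random
import statistics

def simulate_outbreak(seed, p_infect=0.25, contacts=10, generations=8):
    """Toy branching process: each case meets `contacts` people per
    generation and infects each with probability `p_infect`."""
    rng = random.Random(seed)  # seeded: each individual run is reproducible
    cases, total = 1, 1
    for _ in range(generations):
        new = sum(1 for _ in range(cases * contacts)
                  if rng.random() < p_infect)
        cases = new
        total += new
        if cases == 0:  # chance extinction in the first few generations
            break
    return total

runs = [simulate_outbreak(seed) for seed in range(100)]
# Individual runs vary wildly (early randomness dominates);
# the mean over many seeds is the actual result of the model.
print("min/max:", min(runs), max(runs))
print("mean:", statistics.mean(runs))
```

Some runs die out after a handful of cases while others explode, which is exactly why averaging over seeds is standard practice for this kind of model.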
creato|5 years ago
Of course, this kind of uncertainty needs to be dealt with, and that may have been done by running the simulation code we are presented with multiple times. It may be necessary to read both the code and the associated papers to judge this correctly.
The non-determinism that page talks about comes from bugs such as memory corruption, floating-point inaccuracies, and initialisation-order bugs. It is not an intentional part of the model. Averaging corrupted data is meaningless; it doesn't magically fix the corruption.
thu2111|5 years ago
I really wonder what it would take for some people to lose faith in epidemiology. Has this field ever predicted an epidemic correctly? Is there any level of bugginess that would render the output of these teams unacceptable to them?
Exactly. The individual runs aren't actually the "deliverable" of the code; rather, it is the average of many runs that represents the real result.
SiempreViernes|5 years ago
Nothing presented clearly compromises the (supposed) reliability of the distributions produced, so the impact of these bugs, beyond the inconvenience they add, is unclear.
To be clear, it is certainly not true that removing these bugs will somehow prove that the model and its inputs themselves are correct.
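One way such a claim about the distributions could in principle be checked: run two builds of the code across many seeds and compare the empirical distributions of their outputs. A rough sketch with hypothetical stand-ins for the model (`run_model` and `bug_noise` are invented here; the comparison is just the two-sample Kolmogorov-Smirnov statistic computed by hand):

```python
import random

def run_model(seed, bug_noise=0.0):
    """Hypothetical stand-in for one simulation run; `bug_noise` mimics
    extra non-determinism contributed by bugs in a second build."""
    rng = random.Random(seed)
    return sum(rng.gauss(100, 15) for _ in range(20)) + rng.gauss(0, bug_noise)

def max_cdf_gap(xs, ys):
    """Largest vertical gap between two empirical CDFs (the K-S statistic)."""
    xs, ys = sorted(xs), sorted(ys)
    def cdf(sample, t):
        return sum(1 for v in sample if v <= t) / len(sample)
    return max(abs(cdf(xs, t) - cdf(ys, t)) for t in xs + ys)

clean = [run_model(s) for s in range(300)]
buggy = [run_model(s + 1000, bug_noise=5.0) for s in range(300)]
# A small gap suggests the bugs did not visibly shift the distribution;
# it says nothing about whether the model itself is correct.
print("max CDF gap:", max_cdf_gap(clean, buggy))
```

Even a gap of zero would only show the two builds agree with each other, which is precisely the point above: fixing the bugs cannot validate the model or its inputs.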