scaredginger | 1 year ago
If I know what my data look like, I can choose an order of summation that reduces the error. I wouldn't want the compiler to assume associativity by default and introduce bugs. There's a reason this reordering is disabled by default.
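A minimal sketch (my illustration, not the commenter's) of why reassociation changes results: floating-point addition rounds after every step, so the grouping decides which low-order bits survive.

```python
import math

# Floating-point addition is not associative: the grouping changes
# which low-order bits get rounded away.
assert (1e16 + 1.0) + -1e16 == 0.0   # the 1.0 is absorbed by 1e16
assert (1e16 + -1e16) + 1.0 == 1.0   # cancel the large terms first

# The same effect over a whole array: naive left-to-right summation
# drops every small term, while math.fsum tracks the exact sum.
xs = [1e16] + [1.0] * 1000 + [-1e16]
print(sum(xs))        # 0.0
print(math.fsum(xs))  # 1000.0
```

This is why `-ffast-math`-style reassociation is opt-in: a sum the author deliberately ordered (or compensated, Kahan-style) can silently lose its accuracy when the compiler regroups it.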
bee_rider | 1 year ago
If you have written a library, the user will provide some inputs. While the rounding behavior of each floating point operation is well defined, for arbitrary user input you can't usually guarantee which way any given rounding will go. Therefore, if you want to be at all rigorous, you need to do the numerical analysis over the range of inputs users might supply. This gives you results with error bounds, not exact bit patterns.
If you want exact matches for your tests, maybe identify the bits that are essentially meaningless and set them to some fixed value before comparing.
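One way that suggestion could look (a sketch with a hypothetical helper name, not anything from the thread): reinterpret the double's bit pattern and zero the low mantissa bits on both sides before comparing. The caveat in the docstring matters.

```python
import struct

def mask_low_bits(x: float, drop: int = 8) -> float:
    """Zero the lowest `drop` mantissa bits of a double so comparisons
    ignore them. (Hypothetical helper; this only behaves as intended
    when the values being compared share an exponent, since masking
    mantissa bits does not bridge power-of-two boundaries.)"""
    (bits,) = struct.unpack("<Q", struct.pack("<d", x))
    bits &= ~((1 << drop) - 1)
    (out,) = struct.unpack("<d", struct.pack("<Q", bits))
    return out

a = 0.1 + 0.2   # one ulp away from the double closest to 0.3
b = 0.3
assert a != b                                  # exact comparison fails
assert mask_low_bits(a) == mask_low_bits(b)    # masked comparison passes
```

An ULP-distance comparison on the raw bit patterns is the more robust version of the same idea when values may straddle an exponent boundary.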
Edit: that said I don’t think anybody particularly owns rigor on this front, given that most libraries don’t actually do the analysis, lol.