098799 | 2 years ago

Nothing weird about it. It should be obvious that subtracting two floats that are very close to each other results in a loss of numerical precision:

1.000000003456e0 - 1.000000002345e0 = 0.000000001111e0 = 1.111[numerical noise]e-9

It's exactly the same issue here. `math.exp(1e-15)` is `1.000000000000001`. If you subtract 1, you get 1 significant digit and numerical noise.
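Concretely, in Python (a quick sketch; `math.expm1` is the standard-library function that computes exp(x)-1 without the cancellation):

```python
import math

x = 1e-15

# exp(x) rounds to 1 + ~1.11e-15 in double precision, so subtracting 1
# leaves mostly rounding noise:
naive = math.exp(x) - 1   # ~1.11e-15, about 11% relative error
good = math.expm1(x)      # ~1.0e-15, accurate to machine precision

print(naive, good)
```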

nneonneo|2 years ago

The weird thing here is that the only change is making the denominator ln(exp(x)) instead of x. Catastrophic cancellation is still happening in the numerator (it’s still exp(x)-1), and the denominator winds up being some really tiny number.

It’s just that, due to the quirks of the floating point calculations involved, the numerator and denominator wind up being nearly the same noisy approximation to x, whereas in the original calculation that wasn’t true.
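A sketch of the two variants (my own naming for the naive and compensated versions; the post's actual `f`/`g` may differ in details):

```python
import math

def f(x):
    # naive: the rounding noise in exp(x)-1 is divided by an exact x,
    # so the noise dominates the result for tiny x
    return (math.exp(x) - 1) / x

def g(x):
    # compensated: log(exp(x)) carries nearly the same rounding noise
    # as exp(x)-1, so the two errors largely cancel in the quotient
    y = math.exp(x)
    return (y - 1) / math.log(y) if y != 1.0 else 1.0

x = 1e-15
print(f(x))               # ~1.11, about 11% relative error
print(g(x))               # ~1.0, accurate to near machine precision
print(math.expm1(x) / x)  # reference, computed without cancellation
```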

kolinko|2 years ago

That's not what the post says if I understand correctly - the post explains why in certain situations the "noise" disappears, and in other cases it doesn't.

See comparison between f and g functions.

098799|2 years ago

I see! Yes, the magic is that the noise cancels when you make it appear in both places:

```
In [1]: math.exp(1e-15)-1
Out[1]: 1.1102230246251565e-15

In [2]: math.log(math.exp(1e-15))
Out[2]: 1.110223024625156e-15
```

Risky business, though; I imagine it's implementation-dependent.
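For what it's worth, the standard library sidesteps the question entirely: `math.expm1` and `math.log1p` exist precisely for this, with no reliance on the exp/log rounding errors happening to match (a sketch):

```python
import math

x = 1e-15

# expm1/log1p compute exp(x)-1 and log(1+x) directly, without the
# catastrophic cancellation, so no error-cancellation trick is needed:
print(math.expm1(x))    # accurate exp(x) - 1 for tiny x
print(math.log1p(x))    # accurate log(1 + x) for tiny x
print(math.exp(x) - 1)  # naive version, ~11% off at this x
```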