item 42558924

jcmeyrignac | 1 year ago

A better value could be 0x7EEEEBB3, as in: https://github.com/parallella/pal/blob/bd9389f14fe16db4d9630...

dahart | 1 year ago

EDIT: after double-checking my work, I realized I have a better bound on maximum error, but not a better average error. So, the magic number depends on the goal or metric, but mean relative error seems reasonable. Leaving my original comment here, but note the big caveat that I’m half wrong.

One can do better still: 0x7EF311BC is a near-optimal value, at least for inputs in the range [0.001 .. 1000].

The simple explanation here is:

The post’s number 0x7EFFFFFF results in an approximation that is always equal to or greater than 1/x. The value 0x7EEEEBB3 is better, but it’s less than 1/x around 2/3rds of the time. My number 0x7EF311BC appears to be as well balanced as you can get, half the time greater and half the time less than 1/x.

To find this number, I have a Jupyter notebook that plots the maximum absolute value of relative error over a range of inputs, for a range of magic constants. Once it’s set up, it’s pretty easy to manually binary search and find the minimum. The plot of max error looks like a big “V”. (Edit: while the plot of mean error looks like a big “U” near the minimum.)

The optimal number does depend on the input range, and using a different range or allowing all finite floats will change where the optimal magic value is. The optimal magic number will also change if you add one or more Newton iterations, like in that github snippet (and also seen in the ‘quake trick’ code).

PPS maybe 0x7EF0F7D0 is a pretty good candidate for minimizing the average relative error…?

mbitsnbites | 1 year ago

Your suggestion got me intrigued. I have a program that does an exhaustive check for maximum and average error, so I'll give your numbers a spin.