
mbitsnbites | 1 year ago

Given my search criteria, the optimal magic number turns out to be: 0x7ef311c2

  Initial approximation:
    Good bits min: 4
    Good bits avg: 5.242649912834
    Error max: 0.0505102872849 (4.30728 bits)
    Error avg: 0.0327344845327 (4.93304 bits)

  1 NR step:
    Good bits min: 8
    Good bits avg: 10.642581939697
    Error max: 0.00255139507338 (8.61450 bits)
    Error avg: 0.00132373889641 (9.56117 bits)

  2 NR steps:
    Good bits min: 17
    Good bits avg: 19.922843217850
    Error max: 6.62494557693e-06 (17.20366 bits)
    Error avg: 2.62858584054e-06 (18.53728 bits)

  3 NR steps:
    Good bits min: 23
    Good bits avg: 23.674004554749
    Error max: 1.19249960972e-07 (22.99951 bits)
    Error avg: 3.44158509521e-08 (24.79235 bits)
Here, "good bits" is 24 minus the number of trailing non-zero-bits in the integer difference between the approximation and the correct value, looking at the IEEE 754 binary representation (if that makes sense).

Also, for the NR steps I used double precision for the inner (2.0 - x * y) part, then rounded to single precision, to simulate FMA, but single precision for the outer multiplication.
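
Roughly, in NumPy terms (a sketch of the scheme above, not my exact search code):

  import numpy as np

  MAGIC = np.uint32(0x7EF311C2)

  def recip_approx(x):
      # Initial approximation: subtract the float's bit pattern from the
      # magic constant and reinterpret the result as a float32.
      return (MAGIC - np.float32(x).view(np.uint32)).view(np.float32)

  def nr_step(y, x):
      # One Newton-Raphson step for 1/x: y' = y * (2 - x*y). The inner
      # part is done in double and rounded once to float32 to simulate
      # an FMA; the outer multiply stays in single precision.
      inner = np.float32(np.float64(2.0) - np.float64(x) * np.float64(y))
      return np.float32(y) * inner

  x = np.float32(1.5)
  y = recip_approx(x)    # initial estimate of 1/1.5, good to at least 4 bits
  for _ in range(3):
      y = nr_step(y, x)  # each step roughly doubles the good bits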

dahart | 1 year ago

Ah very nice, I was close with using max error: 0.05051 is the same number I got. Pretty sure 0x7ef311c2 came up for me at least a few times as I was fiddling with parameters. Is this using minimum good bits as the deciding criterion, or is it the best overall number using one of the averages and also 1-3 NR steps? Did you limit the input range, or use all finite floats? Having the min/avg error in bits is nice; it’s more intuitive than relative error.

I like the FMA simulation, that’s smart; I didn’t think about it. I did my search in Python. I don’t have it in front of me right now, and off the top of my head I’m not even sure whether my NR steps are in Python precision or fp32. :P My posts in this thread were with NR turned off; I wanted to find the best raw approximation, and I noticed I got a different magic number when using refinement. It really is an amazing trick, right? Even knowing how it works, it still looks like magic when plotting the result.

Thanks for the update!

BTW I was also fiddling with another possible trick that is specific to reciprocal. I suspect you can simply negate all the bits except the sign and get something that’s a decent starting point for Newton iters, though it’s a much worse approximation than the subtraction. So maybe (x ^ 0x7fffffff). Not sure if negating the mantissa helps or if it’s better to negate only the exponent. I haven’t had time to analyze it properly yet, and I don’t know of any cases where it would be preferred, but I still think it’s another interesting/cute observation about how fp32 bits are stored.
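
In the same NumPy style as above, the idea would look something like this (completely untested sketch; note that on the low 31 bits, XOR with all ones equals subtraction from 0x7fffffff, so this is really the magic subtraction with an untuned constant):

  import numpy as np

  def recip_flip_approx(x):
      # Flip every bit except the sign. For the low 31 bits this is
      # identical to 0x7fffffff - bits(x), i.e. the usual magic
      # subtraction with the (untuned) constant 0x7fffffff, which is
      # why it only lands in the right ballpark.
      bits = np.float32(x).view(np.uint32)
      return (bits ^ np.uint32(0x7FFFFFFF)).view(np.float32)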

mbitsnbites | 1 year ago

When measuring the errors I exhaustively iterate over all possible floats in the range [1, 2), by enumerating all IEEE 754 single precision representations in that range. That's "only" 2^23 numbers, so perfectly doable.
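
That enumeration is essentially a one-liner in NumPy (sketch):

  import numpy as np

  # All 2^23 float32 values in [1, 2): sign 0, exponent field 127
  # (0x3f800000 is the bit pattern of 1.0f), mantissa 0 .. 2^23 - 1.
  xs = (np.uint32(0x3F800000) | np.arange(2**23, dtype=np.uint32)).view(np.float32)
  assert xs[0] == 1.0 and xs[-1] < 2.0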

My selection criteria were a bit complex, but something like this (a rough sketch of the ordering follows the list):

1. Maximize the number of accurate bits in the initial approximation.

2. Then the same for NR step 1, then NR step 2, etc.

3. Minimize the max error in the approximation, and then the avg error in the approximation.

4. Same for NR step 1, 2, ...
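
In Python terms it's just a lexicographic key, roughly like this (a sketch of the idea; the per-stage stats dicts and evaluate() are made-up names, and my real code is messier):

  def score(stages):
      # stages[0] = raw approximation, stages[1..3] = NR steps 1-3.
      # Tuples compare lexicographically, so earlier criteria dominate.
      # Errors are negated because smaller errors are better under max().
      key = []
      for s in stages:                 # criteria 1 & 2: accurate bits first
          key.append(s['good_bits_min'])
      for s in stages:                 # criteria 3 & 4: then max/avg error
          key.append(-s['err_max'])
          key.append(-s['err_avg'])
      return tuple(key)

  # best_magic = max(candidates, key=lambda m: score(evaluate(m)))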