In binary floating point, 4.0 = 1.0×2^2, so multiplying by 4.0 leaves the multiplicand's mantissa unchanged (it is multiplied by 1.0) and increments the exponent by 2. Scaling by an exact integer power of 2 preserves the relative accuracy of the input as long as the result stays in range. The increase in absolute error is inherent to the limited number of mantissa bits, not introduced by rounding in the multiplication; no additional bits are produced.
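A quick way to see this, as a sketch: `math.frexp` decomposes a float into its significand and exponent, so we can check that multiplying by 4 changes only the exponent. The specific value used here (the double nearest atan(1)) is just an illustrative choice.

```python
import math

# Multiplying by an exact power of two only rescales the exponent;
# the significand bits are untouched, so no rounding occurs.
x = math.atan(1.0)            # the double closest to pi/4

m1, e1 = math.frexp(x)        # x     == m1 * 2**e1
m2, e2 = math.frexp(4.0 * x)  # 4*x   == m2 * 2**e2

print(m1 == m2)   # same significand, bit for bit
print(e2 - e1)    # exponent incremented by exactly 2
```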
constantcrying|1 year ago
This is about the approximation to pi, not the approximation to float(atan(1))*4. The multiplication is exact (but irrelevant) for the latter; for the former you lose two bits, so you have a 25% chance of correctly rounding towards pi.
adgjlsfhk1|1 year ago