
T0Bi | 9 months ago

> Using some kind of fixed point math would be entirely inappropriate for most HFT or scientific computing applications.

May I ask why? (generally curious)


jcranmer | 9 months ago

For starters, it's giving up a lot of performance, since fixed-point isn't accelerated by hardware like floating-point is.

rendaw | 9 months ago

Isn't fixed point just integer?
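Essentially, yes: fixed point is integer arithmetic with an implicit scale factor. A minimal sketch (using a Q16.16 layout as an illustrative choice; the format and helper names here are not from any particular library):

```python
# Fixed point as scaled integers: 2**16 integer units represent 1.0.
SCALE = 1 << 16

def to_fixed(x: float) -> int:
    return round(x * SCALE)

def from_fixed(f: int) -> float:
    return f / SCALE

def fixed_mul(a: int, b: int) -> int:
    # A plain integer multiply doubles the scale, so shift it back out.
    return (a * b) >> 16

a = to_fixed(1.5)
b = to_fixed(2.25)
print(from_fixed(a + b))            # addition is just integer addition -> 3.75
print(from_fixed(fixed_mul(a, b)))  # multiplication needs rescaling -> 3.375
```

Addition and subtraction come for free, but every multiply and divide needs an explicit rescaling step, which is part of where the performance and convenience gap comes from.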

Athas | 9 months ago

The problem with fixed point is in its, well, fixed point. You assign a fixed number of bits to the fractional part of the number. This gives you the same absolute precision everywhere, but the relative precision (distance to the next highest or lowest number) is worse for small numbers - which is a problem, because those tend to be pretty important. It's just overall a less efficient use of the bit encoding space (not just performance-wise, but also in the accuracy of the results you get back). Remember that fixed point does not mean absence of rounding errors, and if you use binary fixed point, you still cannot represent many decimal fractions such as 0.1.
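Both effects are easy to demonstrate. A sketch, again assuming an illustrative Q16.16 fixed-point format: the gap between adjacent fixed-point values is constant, so relative precision collapses for small magnitudes, while floating point's gap (its ULP) scales with the value; and 0.1 still rounds in binary fixed point:

```python
from math import ulp

SCALE = 1 << 16            # Q16.16 fixed point (illustrative choice)
step = 1 / SCALE           # gap between adjacent fixed-point values: ~1.5e-5,
                           # the same everywhere on the number line

# Relative precision of fixed point depends on the magnitude:
print(step / 1000.0)       # near 1000:  ~1.5e-8 relative error bound
print(step / 0.001)        # near 0.001: ~1.5e-2 -- only ~2 good digits

# Floating point keeps relative precision roughly constant:
print(ulp(1000.0) / 1000.0)  # ~1e-16
print(ulp(0.001) / 0.001)    # ~2e-16

# And binary fixed point cannot represent 0.1 exactly either:
tenth = round(0.1 * SCALE)   # 6554 integer units
print(tenth / SCALE)         # 0.100006103515625, not 0.1
```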

anthk | 9 months ago

With fixed point you either scale the values up or use rationals.
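A quick sketch of both workarounds in Python: rationals (via the standard-library `fractions` module) represent 0.1 exactly, and "scaling up" means choosing units so the values of interest are integers in the first place:

```python
from fractions import Fraction

# Rationals sidestep the binary-representation problem entirely:
a = Fraction(1, 10)                  # exactly one tenth
print(a + a + a == Fraction(3, 10))  # True: no accumulated error

# ...whereas binary floating point rounds each 0.1:
print(0.1 + 0.1 + 0.1 == 0.3)        # False

# "Scaling up" = picking units that make values integral,
# e.g. storing money as cents rather than dollars:
price_cents = 1999                   # $19.99 with no fractional part to lose
```

The trade-off is that exact rationals grow without bound under repeated arithmetic (numerators and denominators keep getting larger), which is one reason they are rarely used in performance-sensitive numeric code.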

osigurdson | 9 months ago

Fundamentally, there is uncertainty associated with any physical measurement, and it is usually proportional to the magnitude being measured. As long as the floating-point error is << this uncertainty, the results are equally predictive. Floating-point numbers bake these assumptions in.
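To put rough numbers on that (the 0.1% measurement uncertainty below is just an illustrative figure): double precision carries a relative representation error of about half of machine epsilon, roughly 1e-16, which is many orders of magnitude below any realistic measurement uncertainty:

```python
import sys

# Relative rounding error bound of an IEEE 754 double: epsilon/2 ~ 1.1e-16,
# independent of the magnitude of the value being stored.
eps = sys.float_info.epsilon
print(eps / 2)

# A measurement of 1234.5 with an assumed 0.1% uncertainty:
measurement = 1234.5
uncertainty = 1e-3 * measurement
representation_error = (eps / 2) * measurement

# The measurement uncertainty dwarfs the rounding error by ~13 orders
# of magnitude, so the floating-point representation is not the
# limiting factor in the result's accuracy.
print(uncertainty / representation_error)
```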