top | item 25837699

bfrink | 5 years ago

In your example, you're only a penny off if you truncate 0.199999999999999996, rather than rounding (which is described in IEEE 754!). Here's a real simple example. Let's say your model depends on the average of the last three ticks. The last three ticks are $1.00, $1.00, and $2.00. Ok, what's the (exact!) average without being off by a fraction of a penny? This is the point - as soon as you start manipulating numbers in anything other than the most trivial way, you run into the dreaded floating point error, because that's how the real numbers work.
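To make that three-tick average concrete, here is a minimal Python sketch showing that the exact average, $4/3, has no exact representation as an IEEE 754 double; the stored value is the nearest representable fraction:

```python
from fractions import Fraction

# Last three ticks: $1.00, $1.00, $2.00. Exact average is $4/3 = $1.333...
ticks = [1.00, 1.00, 2.00]
avg = sum(ticks) / len(ticks)

print(avg)                              # 1.3333333333333333 (shortest repr)
print(Fraction(avg) == Fraction(4, 3))  # False: the double is not exactly 4/3
print(Fraction(avg))                    # the dyadic rational actually stored
```

The error is tiny (one part in ~10^16), but it is there from the very first non-trivial operation, which is the parent's point.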

I am unaware of fixed precision types that are hardware-optimized (other than on FPGAs, which are used for feed handling in HFT anyway). If you are modeling discrete things like Minimum Price Variations, then yes, use fixed precision, or even encode it in a way that saves space. But if you're numerically solving a partial differential equation, e.g., Black-Scholes, it's difficult to see how fixed precision numbers are going to have an advantage.
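As one sketch of the "discrete things get fixed precision" side: Python's `decimal` module is a software decimal type (no hardware support, as noted above), and `quantize` snaps a price onto a tick grid exactly. The $0.01 tick size here is just an assumption for illustration:

```python
from decimal import Decimal, ROUND_HALF_EVEN

TICK = Decimal("0.01")  # assumed minimum price variation for this example

def to_tick(price: Decimal) -> Decimal:
    """Snap a price onto the tick grid using banker's rounding (half-even)."""
    return price.quantize(TICK, rounding=ROUND_HALF_EVEN)

print(to_tick(Decimal("1.005")))  # 1.00 (half rounds to the even cent)
print(to_tick(Decimal("1.015")))  # 1.02
```

Because every value on the grid is exact in decimal, none of the penny-off truncation issues from the float example can occur here.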


hansvm | 5 years ago

I think the point they're going after is that algorithmic trading behavior can be meaningfully sensitive to rounding errors (which seems plausible if you profit by amplifying tiny signals). So in the context of a simulation you might still have components like Black-Scholes, but for the trades themselves (even simulated ones) you need to take more care or risk an excessive error.

In other words, they're describing a scenario where 1 in 10^14 error is potentially not tolerable because of some amplified discrete behavior.
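A tiny sketch of how a ~10^-16 error can get amplified into a fully discrete difference: the strategy and threshold here are hypothetical, but the float arithmetic is the classic `0.1 + 0.2` case.

```python
# Hypothetical strategy: trade only when the computed signal exceeds a level.
signal_exact = 0.3         # the intended value
signal_float = 0.1 + 0.2   # 0.30000000000000004 in IEEE 754 doubles
threshold = 0.3

print(signal_float > threshold)  # True  -> the simulated strategy trades
print(signal_exact > threshold)  # False -> with exact math it would not
```

The relative error is on the order of 10^-16, but once it crosses a decision boundary, the downstream behavior (trade vs. no trade) differs by 100%.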

bfrink | 5 years ago

Agree - real-world discrete things should be modeled as such. If the MPV were $0.23, then model $0.23 increments - whether you use fixed point or the cardinality of increments, who cares. But all the other math leading up to a discrete decision on the increment is almost certainly best described with, and faster to implement in, floats.
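The "cardinality of increments" option above can be sketched as plain integer arithmetic: prices become counts of the MPV, so every representable price is exact by construction. The $0.23 increment is the example value from the comment; the helper names are made up for illustration.

```python
MPV_CENTS = 23  # the example MPV of $0.23, in integer cents

def price_to_increments(price_cents: int) -> int:
    """Convert a price in cents to a count of MPV increments."""
    if price_cents % MPV_CENTS != 0:
        raise ValueError("price is not on the MPV grid")
    return price_cents // MPV_CENTS

def increments_to_price(n: int) -> int:
    """Convert a count of MPV increments back to a price in cents."""
    return n * MPV_CENTS

print(price_to_increments(115))  # 5 increments = $1.15
print(increments_to_price(5))    # 115 cents
```

Only the final snap-to-grid step needs this representation; everything upstream of the discrete decision can stay in floats, as the comment argues.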