
bfrink | 5 years ago

This is a common misapprehension. Yes, fixed precision is great for accounting (like your bank statement), but when you're building a predictive model of asset prices (or of higher moments of price distributions), being off by 1 in 10^14 doesn't matter, because your model isn't that precise anyway, and the performance you get from dedicated floating point hardware is well worth the "loss" of precision.

This wholesale dismissal of floating point for "financial" systems ignores the real business needs, which might point you toward either fixed or floating point numbers. Always ask yourself: do you need to know this number to better than 1 in 10^14? Are you going to take its square root at some point? Also remember that storing a fixed-point number usually takes more bytes than a double-precision float.
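To put that 1-in-10^14 threshold in context, here is a quick Python check (the "price" value is invented) showing that a double carries roughly 16 significant decimal digits per rounded operation:

```python
import math
import sys

# Machine epsilon for a double: the relative spacing of floats near 1.0.
eps = sys.float_info.epsilon           # about 2.22e-16

# A single rounded operation introduces relative error of at most eps/2,
# two orders of magnitude finer than 1 part in 10^14.
x = 1234.5678                          # an arbitrary "price"
root = math.sqrt(x)
rel_err = abs(root * root - x) / x     # error after a sqrt and a multiply
print(eps, rel_err)
```

Both numbers come out well below 1e-14, which is the point: a handful of float operations loses far less precision than the stated tolerance.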


proverbialbunny | 5 years ago

>Always ask yourself - do you need to know this number to better than 1 in 10^14?

Yes, because over time thousands of multiplies and divides mean you will end up more than a penny off. The higher the frequency, the more of a problem floats become.
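The accumulation effect is easy to observe in Python (ten-cent increments and the operation count are chosen arbitrarily), though note the drift here comes from repeated additions:

```python
from decimal import Decimal

# 0.10 has no exact binary representation, so each addition rounds.
n = 1_000_000
total = 0.0
for _ in range(n):
    total += 0.10

exact = Decimal("0.10") * n          # 100000.00, exact in decimal
drift = abs(Decimal(total) - exact)  # Decimal(float) converts exactly
print(total, drift)                  # total is not exactly 100000.0
```

In this toy run the drift is real but still far below a penny; whether it ever reaches a penny depends on magnitudes and operation counts.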

hansvm | 5 years ago

Floating point multiplication and division are generally much safer in terms of precision loss than addition or subtraction, and on the flip side you could easily be more than a penny off with zero operations if the quantities involved were large enough.
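The large-quantity point can be demonstrated in Python (the magnitude is invented for illustration): near 10^16 the gap between adjacent doubles exceeds a cent, so a penny vanishes with no arithmetic drift at all:

```python
import math

big = 1e16                 # a large balance; units don't matter here
# The spacing between adjacent doubles at this magnitude is 2.0,
# so nothing cent-sized can be represented at all.
print(math.ulp(big))       # 2.0

balance = big + 0.01       # try to add a penny
assert balance == big      # the penny is rounded away entirely
```

Multiplication and division, by contrast, lose at most about half an ulp of *relative* error per operation, which is why they are the safer operations here.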

Quibbles aside, they're not suggesting doing accounting with floats. E.g., suppose you want to estimate the expected value of an option. You'll have a model that attempts to describe that option's behavior (e.g. Black-Scholes), and you want to evaluate that model with a certain set of parameters. The model itself is imperfect, and given the transcendentals involved, even if it were flawless there would be a guaranteed loss of precision when mapping a real option to its predicted expected value. The model is a tool that guides decisions, and nobody really cares if it's off by a little bit, because there are a ton of other error sources anyway. 1 in 10^14 is more than good enough.

Edit: Unless you're just suggesting that people should do a little numerical analysis and be cognizant of the total error in a model?
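For concreteness, here is a minimal sketch of the kind of model evaluation described above: a Black-Scholes European call price in Python, with invented parameters. The transcendentals (log, exp, erf) are why exact decimal arithmetic would buy nothing here.

```python
from math import erf, exp, log, sqrt

def norm_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S: float, K: float, T: float, r: float, sigma: float) -> float:
    """Black-Scholes price of a European call (no dividends)."""
    d1 = (log(S / K) + (r + 0.5 * sigma * sigma) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# Invented inputs: spot 100, strike 100, 1 year to expiry, 5% rate, 20% vol.
price = bs_call(100.0, 100.0, 1.0, 0.05, 0.20)
print(round(price, 2))     # roughly 10.45
```

The float rounding in this evaluation is on the order of 1e-16 relative; the model's own assumptions (constant volatility, lognormal returns) introduce errors many orders of magnitude larger.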