sdab | 10 years ago
Two links deep, I found [1]. The salient point being: "...floating point representation gives you extra precision for numbers near zero, this extra precision comes at the expense of precision for numbers further out"
1. http://www.pathengine.com/Contents/Overview/FundamentalConce...
Jyaif | 10 years ago
The canonical example is RTS-style games, where there are hundreds of units but only the players' actions are sent to the other players. The results of those actions are computed independently by every player. Keeping all the players in sync requires determinism in the calculations, which floating-point calculations do not provide.
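As a toy sketch (my own illustration, not any particular engine's netcode): two lockstep "clients" receive the same events, but if they fold them into a float accumulator in different orders, they silently desync:

```python
# Toy lockstep illustration: two clients apply identical inputs, but one
# happens to accumulate them in a different order. Same events, same math,
# different final state -- the summation order alone causes the desync.

def apply_inputs(damage_events):
    """Fold damage events into a running float total, in the order given."""
    total = 0.0
    for d in damage_events:
        total += d
    return total

# Contrived magnitudes chosen to force rounding; real desyncs accumulate slowly.
events = [1.0, 1.0, 1e16, -1e16]

client_a = apply_inputs(events)                        # -> 2.0
client_b = apply_inputs(sorted(events, reverse=True))  # same events -> 0.0

print(client_a, client_b, client_a == client_b)
```

Client A adds the two small values first, so their sum (2.0) is big enough to survive next to 1e16; client B adds them to 1e16 one at a time, where each is below half an ulp and vanishes.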
arjunnarayan | 10 years ago
The short answer is that accumulated calculations deviate in floating-point arithmetic, because floats don't behave like abstract mathematical numbers. One easy way to see this is that operations on floats are not associative, especially when you're working with numbers that differ greatly in magnitude. So you don't have the guarantee that (A * (B * C)) = ((A * B) * C). This can end up with state slowly deviating until it all falls apart.
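A quick Python demonstration of the non-associativity (the classic 0.1/0.2/0.3 case, my example rather than the parent's):

```python
# Floating-point addition is not associative: regrouping the same three
# operands changes the rounding at each step, and thus the result.
a, b, c = 0.1, 0.2, 0.3

left = (a + b) + c   # 0.1 + 0.2 already rounds up slightly
right = a + (b + c)  # 0.2 + 0.3 happens to round to exactly 0.5

print(left, right, left == right)
```

Any deterministic-lockstep scheme has to guarantee that every machine performs these operations in exactly the same order, or results like `left` and `right` diverge.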
The reason for this is fairly easy to see: floats are represented as A * 2^B (A is the 'mantissa', B is the 'exponent'). In double-precision floating point, the mantissa gets 52 bits and the exponent 11 bits (1 is left over for the sign bit). So if you have a really small number, say X = 1.125125123 * 2^-22, you can store it with a lot of precision. If you have a really large number, say Y = 1.236235423 * 2^30, you can also store it with a lot of precision.

But now add the two together, and you're basically going to throw out the smaller number entirely, because you don't have enough mantissa bits: (X + Y) gives you Y. Now take a third number, say Z = 1.5 * 2^-22, and try to compute Y + X + Z. Order matters: (X + Z) together might "bump" the sum up to the point where it is within a multiplicative factor of 2^52 of Y, so it has some impact on the total. But Y + X throws out X, and then (Y + X) + Z throws out Z. So (Y + X) + Z ≠ Y + (X + Z).
P.S. I didn't check the math but that's the basic argument. I could be off by one or two orders of magnitude, but the point stands that floats behave wackily if you care about correctness.
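The parent hedges on the exact magnitudes, so here is the same absorption argument with numbers chosen to make the rounding easy to verify (a full 53-bit mantissa gap, rather than the 2^-22 / 2^30 figures above):

```python
# Absorption: a small addend is thrown away next to a large one, so
# grouping determines whether small contributions survive at all.
y = 2.0 ** 53   # large value: neighboring doubles here are 2.0 apart
x = 1.0         # exactly half an ulp of y -- absorbed on its own
z = 1.0

print((y + x) + z == y)        # x and z are each absorbed individually
print(y + (x + z) == y + 2.0)  # together they are large enough to land
```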
epsylon | 10 years ago
(Many games have shipped with fp in networked code. See http://gafferongames.com/networking-for-game-programmers/flo... and look for previous discussions here on HN, on reddit...)
corysama | 10 years ago
Similar to the PathEngine concept: when I compress 3D meshes to 6 bytes per xyz, I could use float16, but there's no point in having most of the precision concentrated at the center of the mesh. The verts are usually fairly evenly distributed, if not biased towards the outside edge of the bounding box.
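A minimal sketch of the uniform-quantization alternative (helper names are mine, not PathEngine's or any engine's API): map each coordinate into the mesh's bounding box and store one 16-bit integer per axis, giving evenly spaced precision across the whole box instead of float16's bias toward zero:

```python
# Uniform 16-bit quantization of one coordinate axis: 2 bytes per
# component, 6 bytes per xyz, with evenly distributed precision.

def quantize(value, lo, hi, bits=16):
    """Map value in [lo, hi] to an integer in [0, 2**bits - 1]."""
    levels = (1 << bits) - 1
    t = (value - lo) / (hi - lo)
    return int(round(t * levels))

def dequantize(q, lo, hi, bits=16):
    """Recover an approximation of the original coordinate."""
    levels = (1 << bits) - 1
    return lo + (q / levels) * (hi - lo)

lo, hi = -100.0, 100.0            # bounding box extent along one axis
x = 37.123456
q = quantize(x, lo, hi)           # fits in 2 bytes
x2 = dequantize(q, lo, hi)
print(q, x2, abs(x - x2))         # worst-case error is half a step
```

The maximum round-trip error is half a quantization step, (hi - lo) / 65535 / 2, regardless of where the vertex sits in the box.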
Bit-level, cross-platform floating-point determinism is harder than it should be. Both it and associativity matter in physics, financials, replays, and distributed simulation.
The PlayStation 1 and Game Boy Advance didn't have FPU coprocessors, and software floating point was completely impractical, so you pretty much had to use fixed point. So much so that the PS1 did have a fixed-point vector math coprocessor! :D
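For anyone who hasn't worked with it, here is a minimal 16.16 fixed-point sketch (the helpers are illustrative, not any console's API). Everything is integer arithmetic, so results are bit-identical on any conforming platform, which is exactly the determinism property the thread is after:

```python
# 16.16 fixed point: 16 integer bits, 16 fractional bits, stored in an int.
# All operations are plain integer math -- fully deterministic.

FRAC_BITS = 16
ONE = 1 << FRAC_BITS          # the value 1.0 in 16.16

def to_fx(x):
    """Convert a float to 16.16 fixed point (rounding to nearest)."""
    return int(round(x * ONE))

def from_fx(f):
    """Convert 16.16 fixed point back to a float, for display only."""
    return f / ONE

def fx_mul(a, b):
    # Full-width integer multiply yields a 32.32 result; shift back to 16.16.
    return (a * b) >> FRAC_BITS

a = to_fx(1.5)
b = to_fx(2.25)
print(from_fx(fx_mul(a, b)))   # 1.5 * 2.25 = 3.375, exact in this format
```

Addition and subtraction need no shifting at all; the shift in `fx_mul` is the only place the radix point has to be managed.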