sdab | 10 years ago

I was having trouble understanding why fixed-point arithmetic was particularly useful in game development. In fact, I was curious why fixed point representation was more useful than floating point at all...

Two links deep, I found [1]. The salient point being: "...floating point representation gives you extra precision for numbers near zero, this extra precision comes at the expense of precision for numbers further out"

1. http://www.pathengine.com/Contents/Overview/FundamentalConce...

Jyaif | 10 years ago

The most useful feature of fixed-point arithmetic is its determinism. Determinism is necessary when multiple parties need to share the same state, and the state is too large to be exchanged between them.

The canonical example is RTS-style games, where there are hundreds of units but only the players' actions are sent to the other players. Each player computes the results of those actions independently. Keeping all players in sync requires deterministic calculations, which floating point does not provide.

arjunnarayan | 10 years ago

To add to this: when you're trying to minimize information flow among parties, each player might send information only to some other players (perhaps only the set that needs to know right now). Think of unit information going from Player A to Player B while it's in the 'fog of war' for Player C, so you don't send it to C; later, the accumulated state has to be sent to C when it becomes relevant to her. I made this example up, but minimizing the amount of data sent over the network is common practice in latency-sensitive networked multiplayer games.

The short answer is that accumulated calculations drift apart under floating-point arithmetic, because floats don't behave like the real numbers they approximate. One easy way to see this is that operations on floats are not associative, especially when the operands differ greatly in magnitude: you don't have the guarantee that (A * (B * C)) = ((A * B) * C). State can slowly deviate between machines until it all falls apart.

The reason is fairly easy to see: a float is represented as A * 2^B (A is the 'mantissa', B is the 'exponent'). In double precision, the mantissa has 52 bits and the exponent has 11 bits (1 bit is left for the sign).

So if you have a really small number, say X = 1.125125123 * 2^-22, you can store it with a lot of precision. If you have a really large number, say Y = 1.236235423 * 2^30, you can also store it with a lot of precision. But add the two together, and you're basically going to throw out the smaller number entirely, because you don't have enough mantissa bits: (X + Y) gives you Y.

Now take a third number, say Z = 1.5 * 2^-22, and compute Y + X + Z; order matters. (X + Z) together might "bump" the sum up to the point where it is within a multiplicative factor of 2^52 of Y, so it has some impact on the total. But Y + X throws out X, and then (Y + X) + Z throws out Z. So (Y + X) + Z ≠ Y + (X + Z).

P.S. I didn't check the math, but that's the basic argument. I could be off by an order of magnitude or two, but the point stands that floats behave wackily if you care about correctness.
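(A minimal sketch of the non-associativity argument above, using 1e16 and 1.0 as stand-in magnitudes rather than the thread's exact values: the ulp of 1e16 is 2, so adding 1.0 to it does nothing, while adding 2.0 survives.)

```python
# Floats of very different magnitudes make addition non-associative.
a = 1e16  # exactly representable; adjacent doubles here are 2 apart
b = 1.0
c = 1.0

left = (a + b) + c   # each 1.0 is rounded away: a + 1 rounds back to a
right = a + (b + c)  # 1.0 + 1.0 = 2.0 is big enough to survive: a + 2

print(left == right)  # False
print(left, right)    # 1e+16 vs 1.0000000000000002e+16
```

The same effect, accumulated over thousands of frames in different summation orders, is what desyncs lockstep simulations.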

haberman | 10 years ago

IEEE floating point with a specified rounding mode should be deterministic. Unfortunately the C and C++ standards allow calculations to have excess precision, removing this determinism. Other languages, notably JavaScript, do guarantee this determinism in the spec.

epsylon | 10 years ago

The idea that floating point isn't deterministic is patently false. If it weren't, that'd mean every computer adds random noise to the calculation, which is nonsense. This myth seems to persist because fp is hard to work with and has a lot of quirks: its non-uniform division of the number line, the x87 extended-precision mode, the various rounding modes... But none of that stuff is intractable.

(Many games have shipped with fp in networked code. see http://gafferongames.com/networking-for-game-programmers/flo... and look for previous discussion here on HN, reddit...)

Dav3xor | 10 years ago

Not every architecture has an FPU, and doing floating point in software is obviously pretty slow.

strager | 10 years ago

A two-byte fixed-point number is likely to be cheaper to work with than a two-byte floating-point number: promotion and demotion cost is high for small floating-point types (bit masking, shifting, or-ing, etc.), while for fixed point it's a single shift.
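(To illustrate, here's a sketch of a 16-bit fixed-point format; Q8.8 is my assumed layout, not one named in the thread. A value is stored as value * 256 in an integer, so converting, multiplying, and rescaling are all plain integer ops and shifts.)

```python
# Q8.8 fixed point: 8 integer bits, 8 fractional bits.
FRAC_BITS = 8

def to_fixed(x: float) -> int:
    """Encode a float as a Q8.8 integer (value * 256, rounded)."""
    return int(round(x * (1 << FRAC_BITS)))

def to_float(a: int) -> float:
    """Decode a Q8.8 integer back to a float."""
    return a / (1 << FRAC_BITS)

def fixed_mul(a: int, b: int) -> int:
    """Multiply two Q8.8 values: the raw product has 16 fractional
    bits, so shift right by 8 to renormalize."""
    return (a * b) >> FRAC_BITS

a = to_fixed(1.5)    # 384
b = to_fixed(2.25)   # 576
print(to_float(fixed_mul(a, b)))  # 3.375
```

Every step is integer arithmetic, which is exact and identical on every machine, and "promotion" to a wider format really is just a shift.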

sdab | 10 years ago

I figured performance came into it also. It would be nice to see some performance numbers though.

corysama | 10 years ago

Lots of good answers in the comments here.

Similar to the PathEngine concept: when I compress 3D meshes to 6 bytes per xyz position, I could use float16. But there's no point in having most of the precision concentrated at the center of the mesh; the verts are usually fairly evenly distributed, if not biased toward the outside edge of the bounding box.
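(A sketch of that kind of quantization; the bounding box and 16-bit width are illustrative assumptions, not corysama's actual scheme. Each axis is mapped uniformly onto 0..65535, so precision is spread evenly across the box instead of clustered near zero as with float16.)

```python
# Quantize one vertex coordinate to a 16-bit integer, uniform over
# the mesh's bounding box [lo, hi] along that axis.
def quantize(x: float, lo: float, hi: float) -> int:
    t = (x - lo) / (hi - lo)       # normalize to [0, 1]
    return int(round(t * 65535))   # 65536 uniform steps

def dequantize(q: int, lo: float, hi: float) -> float:
    return lo + (q / 65535) * (hi - lo)

lo, hi = -10.0, 10.0               # assumed bounding box on one axis
q = quantize(3.2, lo, hi)
x = dequantize(q, lo, hi)
print(abs(x - 3.2) < (hi - lo) / 65535)  # True: error under one step
```

The worst-case error is the same everywhere in the box, which matches how mesh verts are actually distributed.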

Bit-level, cross-platform floating-point determinism is harder than it should be. Determinism and associativity are important for physics, financials, replays, and distributed simulation.

The PlayStation 1 and Game Boy Advance didn't have FPU coprocessors, and software floating point was completely impractical, so you pretty much had to use fixed point. The PS1 even had a fixed-point vector math coprocessor! :D