top | item 37448099


imagainstit | 2 years ago

Floating point math is not associative: (a + b) + c != a + (b + c)

This means that accumulating a sum in different orders can give different results, and accumulating in different orders is common in parallel math operations.
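A quick illustration in Python; the magnitudes here are chosen deliberately so that 1.0 falls below the precision of 1e16:

```python
a, b, c = 1e16, -1e16, 1.0

left = (a + b) + c   # 0.0 + 1.0
right = a + (b + c)  # the 1.0 is absorbed: -1e16 + 1.0 rounds back to -1e16

print(left)   # 1.0
print(right)  # 0.0
```

Same three numbers, same operation, different grouping, different answer.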


scarmig|2 years ago

So I guess my question here is why a GPU would perform accumulations in a nondeterministic way where the non-associativity of FP arithmetic matters. You could require that a + b + c always be evaluated left to right, and then you'd have determinism, which all things being equal is desirable. Presumably it's because relaxing that constraint allows for some significant performance benefits, but how? Something like avoiding keeping a buffer of all the weights*activations before summing?

imagainstit|2 years ago

Basically because it affects performance. You really don't want to write any buffers!

This is sort of a deep topic, so it's hard to give a concise answer, but as an example: cuBLAS guarantees determinism, but only for the same architecture and same library version (because the best-performing ordering of operations depends on the architecture and implementation details), and it does not guarantee it when using multiple streams (because thread scheduling is non-deterministic and can change the ordering).

Determinism is something you have to build in from the ground up if you want it. It can cost performance, it won't give you the same results between different architectures, and it's frequently tricky to maintain in the face of common parallel programming patterns.
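To make the ordering point concrete, here is a minimal sketch (not cuBLAS's actual algorithm) contrasting a left-to-right sum with the pairwise tree reduction shape that parallel reductions typically use. The input values are contrived so that the two shapes round differently:

```python
def seq_sum(xs):
    # Left-to-right accumulation: one fixed evaluation order.
    total = 0.0
    for x in xs:
        total += x
    return total

def tree_sum(xs):
    # Pairwise (tree) reduction: the shape a parallel reduction uses,
    # combining independently computed halves at the end.
    if len(xs) == 1:
        return xs[0]
    mid = len(xs) // 2
    return tree_sum(xs[:mid]) + tree_sum(xs[mid:])

xs = [1.0, 1e16, -1e16, 1.0]
print(seq_sum(xs))   # 1.0
print(tree_sum(xs))  # 0.0 -- same inputs, different reduction shape
```

A real GPU reduction also interleaves partial sums across warps and blocks, so the effective tree shape can change run to run unless the implementation pins it down.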

Consider this explanation from the PyTorch docs (particularly the bit on CUDA convolutions):

https://pytorch.org/docs/stable/notes/randomness.html

SomewhatLikely|2 years ago

There has been speculation that GPT-4 is a mixture-of-experts model, where each expert could be hosted on a different machine. If those machines report their results to the aggregating machine in different orders, then the results could be summed in different orders.
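A toy sketch of that scenario (purely hypothetical, not GPT-4's actual architecture): the aggregator adds partial results in whatever order they happen to arrive, and two arrival orders yield two different totals with these contrived values:

```python
# Hypothetical partial results from several "experts".
partials = [1e16, 1.0, -1e16, 1.0]

total_a = 0.0
for p in partials:            # arrival order A
    total_a += p

total_b = 0.0
for p in reversed(partials):  # arrival order B
    total_b += p

print(total_a, total_b)  # 1.0 0.0 -- same experts, different arrival order
```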

ossopite|2 years ago

for performance reasons, yes, I believe it's because the accumulation is over parallel computations so the ordering is at the mercy of the scheduler. but I'm not familiar with the precise details

edit: at 13:42 in https://www.youtube.com/watch?v=TB07_mUMt0U&t=13m42s there is an explanation of the phenomenon in the context of training but I suspect the same kind of operation is happening during inference

charcircuit|2 years ago

His point is that you do not have to rely on associativity holding in order to run inference on an LLM.