tripplyons | 9 days ago

There are many ways to compute the same matrix multiplication that apply the sum reduction in different orders, and these can produce different answers with floating point values, because floating point addition is not truly associative: rounding after each add means (a + b) + c need not equal a + (b + c).
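
To make that concrete, here's a minimal sketch assuming NumPy; the shapes and the split point are arbitrary:

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((64, 4096)).astype(np.float32)
    B = rng.standard_normal((4096, 64)).astype(np.float32)

    full = A @ B  # one reduction over the shared 4096-long inner axis
    # Split the inner axis in half and add the partial products, which
    # reorders the additions inside every output element:
    split = A[:, :2048] @ B[:2048, :] + A[:, 2048:] @ B[2048:, :]

    print(np.abs(full - split).max())  # typically nonzero in float32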

spwa4 | 9 days ago

Is that really going to matter in FP32, FP16, or BF16? I would think models would be written so they'd be at least somewhat numerically stable.

Also, if the inference provider guarantees specific hardware, this shouldn't happen.
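
One way to check whether it shows up in FP32 at all, as a sketch assuming NumPy; the 512-element chunking here is made up, not any real kernel's schedule:

    import numpy as np

    rng = np.random.default_rng(1)
    a = rng.standard_normal(4096).astype(np.float32)
    b = rng.standard_normal(4096).astype(np.float32)

    one_pass = np.dot(a, b)  # single reduction over all 4096 products
    # Same dot product computed as 8 partial sums that are then combined:
    partials = [np.dot(a[i:i + 512], b[i:i + 512]) for i in range(0, 4096, 512)]
    chunked = np.sum(np.array(partials, dtype=np.float32))

    print(one_pass, chunked, one_pass == chunked)  # often unequal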

nomel | 9 days ago

Wait, wouldn't it be more significant in low-bit numbers, which is the whole reason they're avoided in maths applications? In any work I've ever done, low-bit numbers were entirely the reason exact order was important, whereas float64 or float128 makes it mostly negligible.
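
That intuition is easy to check with a sketch, assuming NumPy; exact magnitudes will vary by seed and build:

    import numpy as np

    rng = np.random.default_rng(2)
    for dtype in (np.float16, np.float32, np.float64):
        x = rng.standard_normal(10_000).astype(dtype)
        forward = x.sum()                            # one summation order
        shuffled = x[rng.permutation(x.size)].sum()  # same values, reordered
        # The gap between the two orders typically shrinks as precision grows:
        print(np.dtype(dtype).name, abs(float(forward) - float(shuffled)))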