It's not hard to remember what integer division does when your types are ints in the code. It also almost never comes up, and it isn't what floating-point rounding error means. You aren't multiplying money 99% of the time, and when you are, you don't care about exact precision (e.g. a 20% discount). Floating-point rounding error, on the other hand, is about how 0.1 + 0.2 != 0.3.
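For the curious, a quick Python sketch of the two behaviors being contrasted:

```python
# Integer division: when both operands are ints, the result truncates.
# The types in the code make this obvious.
print(7 // 2)  # 3

# Floating-point rounding error: 0.1 and 0.2 have no exact binary
# representation, so their sum is not exactly 0.3.
print(0.1 + 0.2)         # 0.30000000000000004
print(0.1 + 0.2 == 0.3)  # False
```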
pie_flavor|9 months ago