top | item 18832444


anothergoogler | 7 years ago

DAE mind blown by imprecision in float arith?

Edit: Down-Voters go ahead and explain the LIE that is computer accuracy in the face of the linked demonstration. Computers, can't trust em.


throwawaymath | 7 years ago

I didn't downvote you, but this isn't a problem with computers. It's a problem with the (mis)use of floats.

Floats are not decimals. That's unfortunately a really, really common misconception, owing in part to poor education. Developers reach for floats to represent decimals without thinking about the precision ramifications.

When you're working with decimals that don't need a lot of precision this doesn't generally come up (and naturally, those are the numbers typically used in textbooks). But when you start doing floating point arithmetic with decimals that require significant precision, things get bizarre very fast.
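A minimal Python illustration of the mismatch (not from the original comments, just the standard demonstration):

```python
# 0.1, 0.2, and 0.3 have no exact binary representation, so the
# familiar decimal identity fails under float arithmetic.
a = 0.1 + 0.2
print(a)         # 0.30000000000000004
print(a == 0.3)  # False

# What's actually stored for the literal 0.1 is a nearby binary fraction:
from decimal import Decimal
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625
```

The error is tiny per operation, which is exactly why naive tests pass and the surprise only shows up later.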

Unfortunately if a developer isn't expecting it, that's likely to happen in production processing code at a very inopportune time. But the computer is just doing what it's told - we have the tools to support safe and precise arithmetic with decimals that need it. It's a matter of knowing how and when to use floating point.
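One of those tools is Python's decimal module, which does exact base-10 arithmetic. A short sketch of "knowing how and when to use it":

```python
from decimal import Decimal

# Construct Decimals from strings, not floats, or you import the
# binary rounding error along with the value.
total = Decimal("0.10") + Decimal("0.20")
print(total)                   # 0.30
print(total == Decimal("0.3"))  # True (comparison is numeric)
```

Most languages used for business software have an equivalent (Java's BigDecimal, C#'s decimal, etc.); the trap is that the float literal is the path of least resistance.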

jackfraser | 7 years ago

You're probably being downvoted for posting like you're on some other site, more so than for your sentiment that this is just a simple CS 101 thing people ought to know.

Thing is, a lot of people don't take CS courses and have to learn this as they go. More importantly, the naive cases all seem to work fine - it's only at higher precision or larger scales that you notice the cracks in the facade, and even then only if something depends on real accuracy (e.g. real-world consequences for being wrong) or someone bothers to check against a more precise calculator.

My own view on it is that it's past bloody time for languages to offer a fully abstracted class of real numbers with correct, arbitrary precision math - obviating the need for the developer to specify integer, float, long, etc. I don't mean that every language should act like this, but ones aimed at business software development, for example, would do well to provide a first-class primary number type that simply covers all of this properly.

Yes, I can understand that the performance will not be ideal in all cases, but the tradeoff in terms of accuracy, developer productivity, and avoiding common pitfalls would probably be worth it for a pretty big subset of working developers.

haldean | 7 years ago

What is "properly" though? There are many real numbers that don't have a finite representation. Arbitrary precision is all well and good, but as long as you're expressing things as binary-mantissa-times-2^x, you aren't going to be able to precisely represent 0.3. You could respond by saying that languages should only have rationals, not reals, but then you lose the ability to apply transcendental functions to your numbers, or to use irrational numbers like pi or e.
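That tradeoff is visible in Python's fractions module: rational arithmetic is exact, but the moment you apply a transcendental or irrational operation you're back to an approximation (a sketch, not something from the comment above):

```python
from fractions import Fraction
import math

# Rationals are exact: 1/10 + 2/10 is precisely 3/10, no rounding.
print(Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))  # True

# But sqrt(2) is irrational; math.sqrt converts to float and returns
# a float, so the "exact" pipeline silently degrades.
x = math.sqrt(Fraction(2))
print(type(x).__name__)        # float
print(Fraction(x) ** 2 == 2)   # False: it's only an approximation
```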

Performance is only part of the problem, and what it prevents is more-precise floats (or unums or decimal floats or whatever). The other part of the problem is that we want computers with a finite amount of memory to represent numbers that are mathematically impossible to fit in that memory, so we have to work with approximations. IEEE-754 is a really fast approximator that does a good job of covering the reals with integers at magnitudes that people tend to use, so its longevity makes sense to me.
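For concreteness, here is the binary64 approximation IEEE-754 actually stores for 0.3, inspected with Python's struct module (sign 0, biased exponent 0x3FD for 2^-2, mantissa 0x3333333333333):

```python
import struct

# Reinterpret the 8 bytes of a double as a big-endian unsigned integer
# to see the exact bit pattern stored for the literal 0.3.
bits = struct.unpack(">Q", struct.pack(">d", 0.3))[0]
print(f"{bits:016x}")   # 3fd3333333333333
print((0.3).hex())      # 0x1.3333333333333p-2
```

The repeating 0x3 digits are the binary expansion of 0.3 (0.0100110011...) truncated and rounded to 52 mantissa bits, which is exactly where the error in the top-level demonstration comes from.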

mcguire | 7 years ago

Exact real arithmetic is an open research problem (and slow, as well). Arbitrary precision has its own can of worms and is slow, too.