(no title)
anothergoogler | 7 years ago
Edit: Down-Voters go ahead and explain the LIE that is computer accuracy in the face of the linked demonstration. Computers, can't trust em.
throwawaymath | 7 years ago
Floats are not decimals. That's unfortunately a really, really common misconception, owing in part to poor education. Developers reach for floats to represent decimals without thinking about the precision ramifications.
When you're working with decimals that don't need a lot of precision this doesn't generally come up (and naturally, those are the numbers typically used in textbooks). But when you start doing floating point arithmetic with decimals that require significant precision, things get bizarre very fast.
Unfortunately if a developer isn't expecting it, that's likely to happen in production processing code at a very inopportune time. But the computer is just doing what it's told - we have the tools to support safe and precise arithmetic with decimals that need it. It's a matter of knowing how and when to use floating point.
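For example (Python here purely as an illustration, not tied to the linked demo): binary floats can't represent most decimal fractions exactly, while a decimal type carries out the arithmetic you'd expect on paper.

    # Binary floats vs. exact decimal arithmetic (illustrative sketch)
    from decimal import Decimal

    print(0.1 + 0.2)                        # 0.30000000000000004
    print(0.1 + 0.2 == 0.3)                 # False

    print(Decimal("0.1") + Decimal("0.2"))  # 0.3
    print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True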
Sharlin | 7 years ago
[1] https://en.wikipedia.org/wiki/Unit_in_the_last_place
jackfraser | 7 years ago
Thing is, a lot of people don't take CS courses, and have to learn this as they go along. More importantly, the naive cases all seem to work fine - it's only when you get to increasing precision / scales that you notice the cracks in the facade, and that's only if you have something that depends on the real accuracy (e.g. real world consequences from being wrong) or if someone bothers to go and check (using some other calculator that gives more precise results).
My own view on it is that it's past bloody time for languages to offer a fully abstracted class of real numbers with correct, arbitrary precision math - obviating the need for the developer to specify integer, float, long, etc. I don't mean that every language should act like this, but ones aimed at business software development, for example, would do well to provide a first-class primary number type that simply covers all of this properly.
Yes, I can understand that the performance will not be ideal in all cases, but the tradeoff in terms of accuracy, developer productivity, and avoiding common problems would probably be worth it for a pretty big subset of working developers.
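As one rough sketch of what that might look like (Python's fractions module standing in for a first-class exact number type; exact for rationals, slower as the terms grow):

    # Exact rational arithmetic stays correct under accumulation,
    # unlike binary floats (illustrative only)
    from fractions import Fraction

    a = Fraction(1, 10) + Fraction(2, 10)
    print(a)                      # 3/10
    print(a == Fraction(3, 10))   # True

    total = sum(Fraction(1, 10) for _ in range(1000))
    print(total)                             # 100, exactly
    print(sum(0.1 for _ in range(1000)))     # just shy of 100 (rounding error)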
haldean | 7 years ago
Performance is only part of the problem, and what it prevents is more-precise floats (or unums or decimal floats or whatever). The other part of the problem is that we want computers with a finite amount of memory to represent numbers that are mathematically impossible to fit in that memory, so we have to work with approximations. IEEE-754 is a really fast approximator that does a good job of covering the reals with integers at magnitudes that people tend to use, so its longevity makes sense to me.
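You can see the approximation directly (Python again, just to illustrate the point): the double nearest to 0.1 is a long binary-friendly fraction, which Decimal can display exactly.

    # What 0.1 actually is once stored as an IEEE-754 double
    from decimal import Decimal

    print(Decimal(0.1))
    # 0.1000000000000000055511151231257827021181583404541015625

    print((0.1).hex())   # 0x1.999999999999ap-4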
mcguire | 7 years ago