When people wonder why floating point calculations sometimes don't perfectly match the expected "correct" answer[1], people inevitably respond with the famous paper, "What Every Computer Scientist Should Know About Floating-Point Arithmetic"[2].
Unfortunately, the people who suggest that paper don't realize that it's a very technical treatise and not an appropriate introduction for the types of programmers who ask the question.
I've always thought a better answer was to have the programmer "construct" a toy floating point format (e.g. 8 bits instead of 32 or 64). The programmer would then notice that he has a finite representation of only 256 possible "states" to cover everything from -inf through 0 to +inf. He would have to pick a range and a precision to "squeeze" the real numbers into those 8 bits.
Design choice 1: I'll have my 256 possible states represent -1 billion to +1 billion. Well, since you can't overlay 256 states/values across 2 billion unique values, you will have huge "gaps" between numbers.
Design choice 2: I'll have my 256 possible states represent -10 to +10. With a smaller range, you can now increase the precision and represent fractional numbers with digits after the decimal point. But the range is very small. Also, you still have "gaps" where you can't represent most[3] fractional numbers.
The programmer would quickly run into contradictions. No matter what he does, there will always be gaps. The lightbulb would then go on, and he'd immediately scale up the limitations inherent in 8-bit floating point all the way to 64-bit floating point and know exactly why 0.1 + 0.2 does not exactly equal 0.3.
It had been on my vague "todo" list to write such a tutorial but your blog post already gets that "construction of floating point" idea nicely. The programmer can then explore more advanced topics of "normalization" and "bias" in designing a floating point format.
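That construction exercise fits in a few lines of Python. This decodes every byte as a hypothetical 1-4-3 minifloat (1 sign bit, 4 exponent bits, 3 mantissa bits, bias 7), loosely following IEEE 754 conventions; the exact field split and bias are my own choices for illustration:

```python
def decode8(b):
    """Decode one byte as a hypothetical 1-4-3 minifloat (bias 7), IEEE-style."""
    sign = -1.0 if b & 0x80 else 1.0
    exp = (b >> 3) & 0xF
    man = b & 0x7
    if exp == 0xF:                        # all-ones exponent: inf or NaN
        return sign * float('inf') if man == 0 else float('nan')
    if exp == 0:                          # subnormals: no implicit leading 1
        return sign * (man / 8.0) * 2.0 ** -6
    return sign * (1 + man / 8.0) * 2.0 ** (exp - 7)

# all representable values (NaN filtered out via the x == x trick)
values = sorted({decode8(b) for b in range(256) if decode8(b) == decode8(b)})

print("largest finite value:", values[-2])              # 240.0
i = values.index(1.0)
print("gap just above 1:", values[i + 1] - values[i])   # 0.125
j = values.index(128.0)
print("gap just above 128:", values[j + 1] - values[j]) # 16.0
```

The gaps grow with magnitude: near 1 the neighbors are 0.125 apart, near 128 they are 16 apart, which is exactly the "relative, not absolute, precision" behavior of full-size floats.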
> Unfortunately, the people who suggest that paper don't realize that it's a very technical treatise and not an appropriate introduction for the types of programmers who ask the question.
Programmers of the 'types' (whatever those are) that ask such questions are exactly the target audience of that paper; a technical treatment of a technical subject is a-ok.
"If you need floating point you don't actually understand your problem" used to be a common mantra back when floating point computation still incurred a significant speed penalty. We have come a long way since then, but every computer programmer should read that paper and understand it if they want their programs to work properly when using floating point.
You can't really use a tool that you do not understand and using floating point has plenty of pitfalls.
Programmers tend to use floating point casually: because their language of choice switches to it when not told otherwise, because they think it is a magic solution to some problem they have (say, counting beyond some arbitrary number), because they are lazy, or because they need a fractional representation of some number in a well-defined range.
If the chain of computation is long enough, the number of input variables large enough, and the operations varied enough, this will lead to hard-to-diagnose bugs and problems if the code is written without enough understanding. FP is a powerful tool, but like any powerful tool it can hurt you if you don't understand it.
I recently came across this blog post [1] with the same title as that famous paper. But they switched out "scientist" for "programmer" and use a more graphical, less technical approach. It's probably better for this purpose.
EDIT: To clarify I mean "better than the famous paper". The blog post linked in this story is also very good, I'd say these two posts complement each other nicely.
> Unfortunately, the people who suggest that paper don't realize that it's a very technical treatise and not an appropriate introduction for the types of programmers who ask the question.
I think the main value of that paper for newbie programmers is not to learn all the nitty-gritty details about floats (which would be futile), but to get the idea that floats can, and will, behave unintuitively, and that they are not just decimal numbers as many beginner materials suggest.
"No matter what he does, there will always be gaps."
Here is how to do this [1]: keep a fixed-size array of every number seen, and store numbers as index into the array. When or if [2] the array fills up, garbage collect by finding a number in the array that is fairly close and hasn't been used recently, and updating it to contain the value you need _now_. Yes, that may change values computed some time ago, but who cares? Your unit tests will run fine, it teaches you to defend your code against bits flipping in RAM, and it is not as if you remember that that fish you caught months ago was 1.8 meters long and not sqrt(3) meter.
[1] if you like designing esoteric languages.
[2] it wouldn't terribly surprise me to learn that many programs do not need that many different float values to survive long.
The best introduction I had was the back of my ZX Spectrum manual, which offered a very lucid and correct explanation of how maths really works on a binary chip.
Here is a nice challenge: Just like we have arbitrary precision integers (allocating more memory as needed) and arbitrary precision "fractional numbers", design a library providing arbitrary precision "computable numbers".
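A minimal sketch of one possible interface for that challenge (invented here for illustration, not an existing library): model a computable number as a function that, given any rational error bound, returns a rational approximation within that bound.

```python
from fractions import Fraction

# A computable number is modeled as a function eps -> Fraction, returning an
# approximation within eps of the true value. (Toy interface for illustration.)
def const(q):
    q = Fraction(q)
    return lambda eps: q

def add(x, y):
    # approximate each operand to eps/2 so the sum is within eps
    return lambda eps: x(eps / 2) + y(eps / 2)

def sqrt(n):
    def approx(eps):
        lo, hi = Fraction(0), Fraction(n) + 1
        while hi - lo > 2 * eps:          # bisect until the width is <= 2*eps
            mid = (lo + hi) / 2
            if mid * mid <= n:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2              # midpoint is within eps of sqrt(n)
    return approx

x = add(sqrt(2), const(1))                # the computable number sqrt(2) + 1
print(float(x(Fraction(1, 10 ** 15))))    # ~2.414213562373095
```

Precision is only spent when an approximation is demanded, so arbitrarily long chains of operations never accumulate hidden rounding error; the cost is that every query recomputes from scratch unless you add caching.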
Floating point numbers can be justified by two criteria:
1) The distribution of typical numbers
2) The desired precision across a distribution
1: Floating point numbers suggest an exponential distribution, which comes up very often in science, engineering, etc. Very rarely do we have real data neatly packed into a small [-a, a] range.
2: Floating point numbers satisfy the following error metric approximately uniformly: for any -max < x < max, Error = |float(x) - x| / |x|; that is, the relative error is small. This again agrees with real-world requirements for data, where we tolerate larger absolute errors for larger numbers.
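That roughly-uniform relative error is easy to see directly (assuming Python 3.9+ for math.ulp): the spacing between adjacent doubles, divided by the number itself, stays within a factor of two of 2^-52 across wildly different magnitudes.

```python
import math  # math.ulp requires Python >= 3.9

# relative spacing between adjacent doubles at different magnitudes
for v in [1e-10, 1.0, 1e10]:
    print(v, math.ulp(v) / v)  # all within a factor of 2 of 2**-52 ~ 2.2e-16
```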
I think the easiest answer to that question is that rounding happens in almost every step where you do something with them, including parsing from a decimal textual representation and printing back out as one.
The funny thing is that if you work with decimal floating point you usually don't get any of those effects, because you often stay inside the accuracy/precision the format can hold. Yet decimal floats are seen as useless.
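For example, the classic 0.1 + 0.2 surprise disappears with Python's decimal module:

```python
from decimal import Decimal

print(0.1 + 0.2 == 0.3)  # False: 0.1, 0.2 and 0.3 are all rounded in binary
print(Decimal('0.1') + Decimal('0.2') == Decimal('0.3'))  # True: exact in decimal
```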
This is one of the reasons I would be happier if real math/geometry algorithms put more emphasis on "range" processing, i.e. treating each "number" as a scaled "range" and figuring out how to combine those ranges at every step to get a stable solution. Most often I see programmers just bashing out analytic formulas as if they were computing something precisely, and then being surprised that the results are way off. Imagine computing curve intersections by a combination of analytical and iterative methods, and then figuring out whether such an intersection is unique or whether it is some existing endpoint...
Interval arithmetic [1] has been around since the 1950s, I believe, but for some reason it never caught on.
In a nutshell, interval arithmetic stores each imprecise number as a lower bound and an upper bound. On each calculation, the lower bound is rounded down and the upper bound is rounded up. The existing libraries are well developed, and the performance hit is often surprisingly small, close to a factor of 2.
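A toy sketch of the idea (real libraries use directed hardware rounding modes; math.nextafter, Python 3.9+, is used here as a portable stand-in to widen each result outward):

```python
import math

class Interval:
    """Toy interval arithmetic: bounds are widened outward after every op."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(math.nextafter(self.lo + other.lo, -math.inf),
                        math.nextafter(self.hi + other.hi, math.inf))

    def __mul__(self, other):
        # the product of two intervals is bounded by the endpoint products
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(math.nextafter(min(p), -math.inf),
                        math.nextafter(max(p), math.inf))

    def __repr__(self):
        return f"[{self.lo!r}, {self.hi!r}]"

tenth = Interval(0.1, 0.1)   # note: 0.1 itself is already rounded; a real
total = tenth + tenth + tenth  # library would widen the literal too
print(total)
print(total.lo <= 0.3 <= total.hi)  # True: the interval brackets 0.3
```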
The problem is that the range grows with each operation. E.g. the range of x*y is wider than the range of x, so you need to keep track of those ranges. But the ranges themselves will be inaccurate if stored as floating point numbers.
Or just store them as a fraction if you really care about the value not getting corrupted.
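For example, Python's built-in fractions module keeps values exact:

```python
from fractions import Fraction

# exact rational arithmetic: no rounding at any step
x = Fraction(1, 10) + Fraction(2, 10)
print(x)                     # 3/10
print(x == Fraction(3, 10))  # True
print(0.1 + 0.2 == 0.3)      # False with binary floats
```

The trade-off is that numerators and denominators can grow without bound over long computations, and irrational results (square roots, trig) can't be represented at all.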
> This is about floating point arithmetic in computers.
It's about binary floating point arithmetic in computers, though one could do a similar thing for, e.g., decimal floating point. (Though decimal floating point rounding issues will show up much more rarely in many real-world classes of application.)
That's the very interesting border between numbers as abstractions of quantities and numbers as elements of a set (N, Z, R, C, etc.). The question of 0 is particularly puzzling to me.
Most numerical sets you're likely to run into (including N, Z, R, and C) are groups under addition, which (among other things) means `x + a = a ⇔ x = 0`.
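Floats conspicuously fail that implication, which is one way to see that they are not a model of the reals:

```python
# in IEEE 754 doubles, a nonzero addend can be swallowed entirely
a = 1e16
x = 1.0
print(x + a == a)  # True, even though x != 0
```

At 1e16 the spacing between adjacent doubles is 2, so adding 1 rounds straight back to a.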
StackOverflow user tmyklebu provided an efficient algorithm to solve a more general floating-point equation, C1 + x = C2, using only computations in the very floating-point system that the equation is written in:
I'd be amazed if the hardware you're using and the software you're running don't have the "feature" that for some value of x, 1+x == 1 and yet 1-x != 1. If you read the article you might understand why.
If you don't understand why, perhaps you could come back and ask specific questions.
For the concrete and practical minded who don't want to bother with understanding the theoretical arguments, here's some python:
#!/usr/bin/env python3
a = 1.
x = 1.
while a + x != 1.:
    x = x / 2
print('x =', x)
print('a+x =', '%.18f' % (a + x))
print('a-x =', '%.18f' % (a - x))
Output:
x = 1.1102230246251565e-16
a+x = 1.000000000000000000
a-x = 0.999999999999999889
Even leaving aside hardware and software implementations, have you never been in a situation where you put a number into a fixed-sized box, perhaps three digits?
[1] http://stackoverflow.com/questions/588004/is-floating-point-...
[2] http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.ht...
[3] http://math.stackexchange.com/questions/474415/intuitive-exp...
[1] http://blog.reverberate.org/2014/09/what-every-computer-prog...
jordigh | 11 years ago
http://wiki.octave.org/FAQ#Why_is_this_floating_point_comput...
I think people just forget that when they see "0.1", it can't be represented exactly in binary.
I also link to the following long-winded explanation of how floats work:
http://floating-point-gui.de/
lugg | 11 years ago
Thank you. An upvote just doesn't really cover it this time.
nitrogen | 11 years ago
Examples:
- https://en.wikipedia.org/wiki/Half-precision_floating-point_...
- https://en.wikipedia.org/wiki/Double-precision_floating-poin...
- https://en.wikipedia.org/wiki/IEEE_754-1985#Representation_o...
Pages with loads of information but no bitwise diagrams:
- https://en.wikipedia.org/wiki/Floating_point#IEEE_754:_float...
- https://en.wikipedia.org/wiki/IEEE_floating_point
ColinWright | 11 years ago
Thank you.
I think I've sent a collection of suggested corrections via the web site; I include just the technical one here:
You wrote:
This is missing a minus sign at the very end.
gus_massa | 11 years ago
I'll probably fix all the other errors/typos you sent in the mail, but it will take a few minutes.
[1] http://en.m.wikipedia.org/wiki/Interval_arithmetic
madez | 11 years ago
Arbitrary-precision floating point arithmetic, such as GMP offers, doesn't suffer these issues.
http://stackoverflow.com/a/24223668/139746
frozenport | 11 years ago
Or just fixed precision? For example, the calculation is intuitive for `int`s.
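For example, the classic fixed-point trick of keeping money amounts in integer cents (a sketch, not a full money type):

```python
# fixed-point: store currency as integer cents, convert only for display
price_cents = 1999                 # $19.99
total_cents = price_cents * 3      # exact integer arithmetic, no rounding
print(total_cents)                 # 5997
print(f"${total_cents // 100}.{total_cents % 100:02d}")  # $59.97
```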
t0mbstone | 11 years ago
If this is not the case, then you have a crappy hardware or software math implementation that doesn't follow logic. Duh.
Dylan16807 | 11 years ago
1.002 has to be written as 1.00
.998 can be written as .998
Perfectly logical.
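That three-significant-digit box can be sketched with Python's decimal module (round_sig is a helper invented here for illustration):

```python
from decimal import Decimal, ROUND_HALF_EVEN

def round_sig(x, digits=3):
    # round a decimal string to a fixed number of significant digits
    d = Decimal(x)
    shift = d.adjusted() - (digits - 1)   # exponent of the last kept digit
    return d.quantize(Decimal(1).scaleb(shift), rounding=ROUND_HALF_EVEN)

print(round_sig("1.002"))  # 1.00  (neighbors are 0.01 apart here)
print(round_sig("0.998"))  # 0.998 (neighbors are 0.001 apart here)
```

The gap between representable values shrinks by a factor of ten just below 1, exactly the kind of magnitude-dependent spacing binary floats have, only in base 2.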