Once I wrote a library for double-to-string conversion and vice versa, which handles such roundings nicely: https://github.com/mkupchik/dconvstr
The key idea is not just to map a binary floating point value X to a decimal floating point value Y, but instead (in extended precision, with a 64-bit mantissa) to compute an interval of decimal floating point values [Y1, Y2] which map back to X (in standard precision, with a 53-bit mantissa), then choose the Y from [Y1, Y2] that has the shortest decimal representation.
So for instance 0.29999999999999993 + 0.00000000000000003 -> 0.3, but 0.30000000000000002 -> 0.30000000000000004
Note that this will still not solve the 0.1 + 0.2 problem from the OP, since the nearest float to 0.3 is not actually the same as the nearest float to 0.1 + 0.2.
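For illustration, the shortest-round-trip rule can be brute-forced in a few lines of Python (repr has used the same rule since Python 2.7; the function name here is mine, not from dconvstr):

```python
def shortest_roundtrip(x: float) -> str:
    # Try successively more significant digits until the decimal
    # string parses back to exactly the same double.
    for digits in range(1, 18):
        s = f"{x:.{digits}g}"
        if float(s) == x:
            return s
    return f"{x:.17g}"  # 17 significant digits always round-trip a double

print(shortest_roundtrip(0.3))        # 0.3
print(shortest_roundtrip(0.1 + 0.2))  # 0.30000000000000004
```

A real conversion routine like dconvstr computes the interval [Y1, Y2] directly instead of searching, but the result is the same idea: the shortest decimal that maps back to the original bits.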
Someone should figure out the mean length of time between article repeats on HN. And then find some correlation with university degrees and job tenure.
A maths professor, during their lecture, says "obviously, we now have equation 42" when a student interrupts, asking "how is that obvious?"
The professor looks back at the blackboard, starts to speak, but then remains silent as a consternated look falls across their face. After ten minutes of silent pondering, they erase three blackboards and manically fill them with equations, derivations, and other expressions. After another half-hour of furious scribbling--eventually filling both sides of two more free-standing chalkboards--they exclaim, "AHA! It is obvious!"
It's phrased incredibly badly, but basically you can only write a fraction with a finite number of decimals if the denominator divides 10^k (for some k). This means the denominator can't have prime factors other than 2 and 5 (which are the two prime factors of the base). This last statement isn't exactly trivial, but is reasonably easy to prove.
It turns out that humans like numbers that are evenly spaced like 1/10, 2/10 (1/5), 3/10, 4/10. That all seems pretty evenly spaced to us, but it's actually totally arbitrary that we have 10 handy (ugh, the puns) appendages. If we chill with space aliens, they will think numbers are evenly spaced like 1/14, 2/14 (aka 1/7), 3/14, 4/14 (aka 2/7)... much like jamming a metric socket on an imperial nut to fix your car, the amount of error varies wildly based on size. You can get away with a 9/16 socket on a 14 mm nut most times, but a 15/16 socket will probably require a hammer to fit a 24 mm nut. Or just round off all the nuts with vise-grip pliers. Anyway, the space aliens' 7/14 is actually a perfect match to our 5/10, but the amount of error varies from not much to quite a bit in the other attempts at interplanetary standardization. Likewise, if you think of the computer as a space alien (not too far from the truth, I sometimes feel), it uses binary and thinks numbers like 1/8, 2/8 (1/4), 3/8 are evenly spaced. Or, as a specific example, expressing our 1/2 in floating point is pretty easy for most computers, while expressing 1/5 or 1/3 is somewhat less successful, or somewhat far in error compared to our closest 1/10th-based numbers.
In the old days, like half a century ago, there were competing software floating point designs with varying levels of success vs speed. Everything is super boring today and standardized. 72-bit PDP-10 floating point was from a romantic, adventurous era, like an Indiana Jones movie (well, one of the good ones). IEEE 754 is modern and better in all regards, yet has all the panache and style of a modern econobox commuter car. Think of the glory of PDP-8 Fortran floating point spread unevenly across three words of four octal digits each: 2 sign bits, the 2 MSBs of the mantissa, and 8 bits of exponent in the first 12-bit word. Now that is something glorious to wake up to in the morning.
If you can write n=p/q with p and q relatively prime using d digits after the decimal point in base 10, 10^d times n is an integer. We also have
10^d*n = 10^d*p/q
so 10^d*p must be divisible by q. Since p and q are relatively prime, 10^d must be divisible by q. That's only possible if all prime factors of q are 2 or 5.
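That criterion is easy to check mechanically; here is a small Python sketch (the function name is mine, for illustration):

```python
from math import gcd

def terminates_in_base10(p: int, q: int) -> bool:
    # p/q has a finite decimal expansion iff, once the fraction is
    # reduced, the denominator has no prime factors other than 2 and 5.
    q //= gcd(p, q)
    for f in (2, 5):
        while q % f == 0:
            q //= f
    return q == 1

print(terminates_in_base10(1, 2))   # True  (0.5)
print(terminates_in_base10(1, 3))   # False (0.333...)
print(terminates_in_base10(3, 12))  # True  (3/12 = 1/4 = 0.25)
```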
I get angry every time I read about floating point being discussed by people who don't understand how numbers work, but I am going to try not to rant and be constructive.
Think of it this way. You can express the fraction 1/2 in decimal (0.5) and in binary because 2 divides evenly into both 10 and 2. You can't represent 1/3 in either, but you could in trinary (digits 0, 1, 2; it would be 0.1). Now, why programmers feel the numbers you can't represent exactly in binary are somehow worse than the ones you can't represent in decimal baffles me. I guess we are just used to them and anything else seems wrong. But they are just as legitimate.
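A toy long-division routine makes the point concrete: whether a fraction terminates depends only on the base you expand it in (a sketch assuming 0 <= p < q and base <= 10; the names are mine):

```python
def frac_digits(p: int, q: int, base: int, n: int = 12) -> str:
    # Long division of p/q in the given base, up to n digits.
    digits = []
    for _ in range(n):
        p *= base
        digits.append(str(p // q))
        p %= q
        if p == 0:
            break  # the expansion terminates
    return "0." + "".join(digits)

print(frac_digits(1, 3, 3))   # 0.1 in ternary: exact
print(frac_digits(1, 3, 10))  # 0.333333333333 in decimal: repeats
print(frac_digits(1, 10, 2))  # 0.000110011001 in binary: repeats
```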
Whenever IEEE 754 and its quirks are discussed, a potential alternative (so far without hardware support) called Unum (Universal Numbers) should not go unmentioned:
IEEE 754 is not a trivial standard (it earned Kahan a Turing award). Error modes for IEEE 754 are precisely defined, even though it requires a lot of effort to understand what they mean. (For example, overflow triggers an exception, but gradual underflow is allowed.) Going beyond it requires some serious effort, and unums seem not to be the solution.
A good book for understanding the IEEE 754 standard is the one by Michael Overton. The quoted article is unfortunately an example of floating-point "scaremongering". Floating point arithmetic is not approximate; it is accurately defined, with precise error modes. Base-10 arithmetic is not, however, the model for understanding it; "units in the last place" (ulps) are.
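In Python the ulp model is directly observable: math.ulp (available since Python 3.9) gives the spacing between adjacent doubles at a value, and the famous 0.1 + 0.2 discrepancy turns out to be exactly one ulp of 0.3:

```python
import math

err = (0.1 + 0.2) - 0.3
print(err)                   # 5.551115123125783e-17
print(err == math.ulp(0.3))  # True: off by exactly one unit in the last place
```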
There are a huge pile of libraries for doing bignum, rational, fixed point, decimal, complex, vector, polynomial, symbolic, etc. math in Javascript. If you want you can even write one for 7-adic arithmetic, Eisenstein integers, numbers expressed in the golden ratio base, or points on the surface of a torus.
There’s no language that can build in all possible number systems, and it’s not really the place of a language to try.
It would be nice if JavaScript made a better type distinction between integers and floats though.
Here's something that would probably help lessen the confusion: don't allow, or at least warn about, implicit conversion of source code literals into binary floats.
float foo = 0.3; // warning: invalid binary float value
float bar = 0.3f; // ok, converts into nearest bin float
decimal qux = 0.3m; // ok
decimal feh = 0.3; // why not; will be exactly 0.3
All in all, languages go for decimal floats by default but then the implicit conversions ruin it for everybody who's still learning.
Good suggestion, but it still wouldn't prevent beginners from just learning the 0.3f notation as an idiom and skipping the explanation. The trouble with floats is that understanding them depends on knowledge of binary, scientific notation, etc. It's a lot for someone who is only just getting to grips with how a program works.
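Python's decimal module shows what such an explicit distinction buys: a string literal keeps the decimal value, while constructing from a float exposes the binary value actually stored behind the literal:

```python
from decimal import Decimal

print(Decimal('0.1') + Decimal('0.2'))  # 0.3, exact decimal arithmetic
print(Decimal(0.1))  # the double behind the literal 0.1:
# 0.1000000000000000055511151231257827021181583404541015625
```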
For people using C or C++ I can recommend using decimal floating point (which may be added to the standard in C++20).
Unlike "default" floating point, which is base 2 and cannot represent certain decimal numbers exactly, a decimal floating point system uses base 10 and therefore represents them exactly, solving the problem mentioned on the page.
Also important: since 2008 this is standardized in IEEE 754-2008 which added support for decimals.
If you want to represent currency, for example, you should not use decimal floating point. You should use integers; specifically, an integer number of tenths of cents, which is pretty widely agreed to be the standard unit of currency in a computer (or tenths of yen, for example). You need to be extremely careful about overflow, but you need to be anyway, and should almost certainly just use arbitrary-precision integers.
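A sketch of the idea in Python, where ints are already arbitrary precision (the item price, the 8.25% tax rate, and the half-up rounding policy are all made-up illustrations, not a recommendation):

```python
# Money as an integer count of tenths of a cent: $1.00 == 1000 units.
subtotal = 3 * 19_990                     # three items at $19.99 each
tax = (subtotal * 825 + 5_000) // 10_000  # 8.25% tax, rounded half-up
total = subtotal + tax
print(total, total / 1000)  # 64918 64.918; the division is for display only
```

All the arithmetic that matters stays in integers; floats appear only at the final display step, where rounding can no longer accumulate.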
My personal view is decimal arithmetic should be the default and floating-point should be the library in scripting languages. And probably also Java/C#/Go-type languages.
My personal view is that the programmer needs to know the difference because there are important tradeoffs to consider.
I do however think it should be easier to use decimal arithmetic. I've written financial apps in both Java and JavaScript and I wish both languages had native syntax and support for decimal numbers and arithmetic. JavaScript has nothing built-in and Java's BigDecimal class is clunky to use.
Decimal only takes care of a subset of cases, i.e. those where the source data nicely lines up in decimal. Good for finance (although I think Britain made a mistake when it gave up the old penny and the shilling). But even there, you sometimes have to divide by numbers like 12 and 365.
The point is NO representation is going to be idiot proof. So you just need to make a sufficient set of representations easy to use, and then try to teach the "idiots" what the underlying issues are.
I'm not sure this is such a good idea. I love rational datatype, but it's too easy to shoot yourself in the foot with simple numerical procedures resulting in gigantic bignum denominators.
Rational numbers are horrible for most numerical computing. The denominators double in length with every multiplication, so computation slows down exponentially.
Any time there’s any degree of uncertainty about a quantity (e.g. it comes from a physical measurement) there’s also no longer any advantage to using rational arithmetic. This turns out to encompass most practical situations.
Rational arithmetic also breaks down entirely in the face of square roots or trig functions, unless you go for a fully symbolic computation environment, which gets even much slower.
Rational arithmetic is mostly nice when the problems have been carefully chosen so the operations will stay rational and the answers will work out nicely, e.g. in high school homework.
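The blow-up is easy to reproduce with Python's exact Fraction type; iterating one quadratic map roughly doubles the number of digits in the denominator at every step:

```python
from fractions import Fraction

x = Fraction(1, 3)
r = Fraction(7, 2)
for step in range(1, 6):
    x = r * x * (1 - x)  # logistic map, kept exact
    print(step, len(str(x.denominator)))  # digit counts: 1, 2, 4, 8, 16
```

Five steps are harmless; fifty would give a denominator with hundreds of trillions of digits, which is exactly the foot-gun being described.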
Default doesn't mean only: from memory, Rakudo smooshes Rats into floating-point Nums when the denominator gets above 2^64 or so.
FatRats are infinite-precision Rats. FatRats, like Rats, are Cool (which is a double entendre, because Cool means it knows how to be a string, which is apparently "Cool"). The default recently has not been FatRat, but one of the many benefits of a language under development for 20 years is that quite possibly that default was changed last weekend. Or maybe FatRats were the default for a painful week back in '02. Probably not, but it could have happened.
Perl does the right thing, but if you're really unhappy about not using floats you can force a .Num conversion into floating point, or express a number in scientific notation to force floats. Perl really wants to make a division problem into a Rat unless you work hard to stop it.
In general, for the past 30 years or so, if you're doing something weird, Perl will work and probably give faster and possibly more accurate results than most other tools. But the rest of the world will not be interested in debugging Perl 6; it will instead be asking "why do you use Perl 6 instead of GNU R or Mathematica?" or whatever is trendy today. Perl will never be trendy, yet the whole world runs on it.
The Common Lisp answer is 0.3 and not 0.30000000000000004 because those are single float literals: for double float literals, we have (+ 0.1d0 0.2d0) => 0.30000000000000004d0.
I would imagine this is the case for several of the other examples, too.
These examples all seem to be using double precision floating point, which might be misleading. For example, the C++ snippet is adding two double literals, not float literals. The result is even more inaccurate using single precision arithmetic:
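That effect can be simulated without C++ at hand by rounding each value through a 32-bit IEEE single with Python's struct module (a sketch; the helper name is mine):

```python
import struct

def to_f32(x: float) -> float:
    # Round a double to the nearest IEEE 754 single, then widen it back.
    return struct.unpack('f', struct.pack('f', x))[0]

print(0.1 + 0.2)                          # 0.30000000000000004
print(to_f32(to_f32(0.1) + to_f32(0.2)))  # 0.30000001192092896
```

The single-precision error is around 1e-8, versus about 6e-17 in double precision.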
Side anecdote: Many, many years ago, I wrote a lease payment plan calculator for a big bank-linked leasing company when they moved from AS/400 to .Net. I made sure all rounding was policy driven, not only in precision but also in where roundings would occur during the calculations. One-cent differences do matter in such environments. Somehow, with a config based on nothing more than my guesses, it passed out of the box all the extensive regression tests that compared against their mainframe code. AFAIK the code is still in production to this date.
Only if you're trying to scare the newbies away. This stuff does need to be learned at some point, but CS101 should be about getting a feel for programming in general.
Focusing on the basics of objects, classes, and functions (with mandatory source control) is smart before killing their will to learn with nasty edge cases.
Base 2 floating point is perfectly OK. The problem is people using floating point without knowing that the IEEE 754 representation is binary, not decimal, so most decimal constants will not have an exact match. That said, many implementations do buggy binary <-> decimal conversions, not because of the limited accuracy of the conversion, but because of "creative" rounding criteria.
More interesting is exactly why floating point 0.1 + 0.2 != 0.3, and it's due to the way rounding is defined in the IEEE-754 standard:
> An implementation of this standard shall provide round to nearest as the default rounding mode. In this mode the representable value nearest to the infinitely precise result shall be delivered; if the two nearest representable values are equally near, the one with its least significant bit zero shall be delivered.
If we convert from decimal to double precision (64-bit) floating point, here is how they are represented in hexadecimal and binary:
However we only have 52 bits to represent the mantissa (again marked with ^), so the result above has to be rounded. Both possibilities for rounding are equidistant from the result:
So according to the specification, the option with the least significant bit of zero is chosen.
Converting this back to floating point format we get 0x3FD3333333333334. Note that the least significant four bits of the mantissa are 0100, which corresponds to the trailing 4 in the hexadecimal representation.
This is not equal to 0x3FD3333333333333 (the result of conversion from decimal 0.3, and also what would have been the result here if the rounding was specified the other way.)
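These bit patterns can be checked by reinterpreting each double as a 64-bit integer, e.g. with Python's struct module (helper name is mine):

```python
import struct

def bits(x: float) -> str:
    # Reinterpret the raw bytes of a double as an unsigned 64-bit integer.
    u, = struct.unpack('<Q', struct.pack('<d', x))
    return f"0x{u:016X}"

print(bits(0.1))        # 0x3FB999999999999A
print(bits(0.2))        # 0x3FC999999999999A
print(bits(0.3))        # 0x3FD3333333333333
print(bits(0.1 + 0.2))  # 0x3FD3333333333334
```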
Why is that? Because I made a "populist" decision purely for the sake of better "optics". The default printing precision for floating-point values is set at just enough digits to truncate things in this manner:
2> *print-flo-precision*
15
The cost to doing that is that the printed representation of floats isn't guaranteed to have read/print consistency: what we print doesn't always read back as the same value which in other words means that we don't have a unique printed representation for each possible value. That default of 15 isn't my favorite language design decision. It's justifiable and I believe in it, but not as wholeheartedly as in some other design decisions.
In any case, language users who need to record every bit of the IEEE 754 double in the printed representation are not just hung out to dry: they can set or override this special variable to 17.
(One value here is the `prinl` output, under the dynamic scope of the special variable override; the other is the result value, printed in the REPL, with the top-level binding of it, still at 15 digits.)
Oh, and by the way, we don't have to add 0.1 to 0.2 to demonstrate this. Either of those values by itself will do:
The reason 15 is used as the default precision, rather than, say, 14, is that 15 still assures consistency in one direction: if any number with up to 15 digits of precision is input into the system, upon printing it is reproduced exactly in all 15 digits. And 15 is the highest number of digits for which this is possible with an IEEE 64-bit double.
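The trade-off is the same one visible with printf-style formatting in any language; in Python:

```python
x = 0.1 + 0.2
print(f"{x:.15g}")              # 0.3  (15 digits hide the discrepancy)
print(f"{x:.17g}")              # 0.30000000000000004
print(float(f"{x:.15g}") == x)  # False: 15 digits don't round-trip this value
print(float(f"{x:.17g}") == x)  # True: 17 digits always do
```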
For this reason R has a function for comparing results of floating point math: all.equal()
It is quite tricky, as plain output might show that .1 + .2 is 0.3, but this is only because the default output is subject to default formatting (i.e. it hides the extra digits of precision).
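The closest Python analogue to all.equal is math.isclose, which compares with a relative tolerance instead of demanding bit-for-bit equality:

```python
import math

print(0.1 + 0.2 == 0.3)              # False: exact comparison of doubles
print(math.isclose(0.1 + 0.2, 0.3))  # True: within the default rel_tol of 1e-09
```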
jacobolus | 9 years ago:
See also http://www.netlib.org/fp/ and http://web.archive.org/web/20060908072403/http://ftp.ccs.neu...
dietrichepp | 9 years ago:
http://www.netlib.org/fp/dtoa.c
okket | 9 years ago:
https://news.ycombinator.com/item?id=10558871 (1.5 years ago, 240 comments)
https://news.ycombinator.com/item?id=1846926 (6.5 years ago, 128 comments)
msie | 9 years ago:
In a way, not so simple (obvious to you? not to me)
VLM | 9 years ago:
https://en.wikipedia.org/wiki/Rational_data_type
Anyway...
Dylan16807 | 9 years ago:
"Moving the decimal point divides by 10. You can ONLY make fractions of 10 via the decimal point."
That plus a couple examples should do fine.
No need to mention prime numbers. No need to use variables.
micahbright | 9 years ago:
FTFY
zeotroph | 9 years ago:
https://en.wikipedia.org/wiki/Unum_(number_format)
Previous discussions (>2y old):
https://news.ycombinator.com/item?id=9943589 and
https://news.ycombinator.com/item?id=10245737
A slideset: https://www.slideshare.net/insideHPC/unum-computing-an-energ...
sn41 | 9 years ago:
http://people.eecs.berkeley.edu/~wkahan/UnumSORN.pdf
dnautics | 9 years ago:
https://youtu.be/aP0Y1uAA-2Y
pcprincipal | 9 years ago:
For those interested, the largest remainder method (https://gist.github.com/hijonathan/e597addcc327c9bd017c) is useful for dealing with this.
mcculley | 9 years ago:
Yes, your language is broken if you have no way out of floating point math. JavaScript, for example, is fundamentally broken.
macqm | 9 years ago:
Decimal uses 128 bits to hold its data and handles rounding differently from System.Double; e.g. 0.1 and 0.2 as decimals evaluate to 0.3:
> 0.1M + 0.2M
0.3
You can read more about decimal here: https://msdn.microsoft.com/en-us/library/364x0z75.aspx
kirab | 9 years ago:
Explanation of decimal floating points: https://en.m.wikipedia.org/wiki/Decimal_floating_point
Libraries:
https://software.intel.com/en-us/articles/intel-decimal-floa...
http://www.bytereef.org/mpdecimal/
http://speleotrove.com/decimal/
C++ Standard Proposals:
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2012/n340...
http://open-std.org/JTC1/SC22/WG21/docs/papers/2014/n3871.ht...
YZF | 9 years ago:
http://www.rexxla.org/events/2001/mike/rexxsy01.PDF
lispm | 9 years ago:
a) specify the default format, as above;
b) or use the s, f, d, and l exponent markers to specify the format (short-float, single-float, double-float, long-float), e.g. 1.0s0, 1.0f0, 1.0d0, 1.0l0.
besselheim | 9 years ago:
Taking 0.1 as an example, here is what its binary representation actually means: the exponent field is biased by 1023, so in this case we have 01111111011, which is decimal 1019, making the exponent 1019 - 1023 = -4. The mantissa (BBBB…) is an encoding of the binary number 1.BBBB…, so with an exponent of -4 that makes the actual number 0.0001BBBB….
Applying this for each of these numbers:
Then if we add 0.1 + 0.2, this is the result:
Therefore, floating point 0.1 + 0.2 != 0.3.