The big issue here is what you're going to use your numbers for. If you're going to do a lot of fast floating point operations for something like graphics or neural networks, these errors are fine. Speed is more important than exact accuracy.
If you're handling money, or numbers representing some other real, important concern where accuracy matters (most likely, any number you intend to show to the user as a number), floats are not what you need.
Back when I started using Groovy, I was very pleased to discover that Groovy's default decimal number literal was translated to a BigDecimal rather than a float. For any sort of website, 9 times out of 10, that's what you need.
I'd really appreciate it if JavaScript had a native decimal number type like that.
Decimal numbers are not conceptually any more or less exact than binary numbers. For example, you can't represent 1/3 exactly in decimal, just like you can't represent 1/5 exactly in binary.
When handling money, we care about faithfully reproducing the human-centric quirks of decimal numbers, not "being more accurate". There's no reason in principle to regard a system that can't represent 1/3 as being fundamentally more accurate because it happens to be able to represent 1/5.
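The point is easy to check in a JS console: binary floats are exact for exactly those fractions whose denominators are powers of two, and a decimal type would simply move that boundary rather than remove it.

```javascript
// 1/10 and 1/5 have no finite binary expansion, so their float forms are approximations:
const decimalish = 0.1 + 0.2 === 0.3;   // false
// Dyadic fractions (powers of two in the denominator) are represented exactly:
const dyadic = 0.25 + 0.5 === 0.75;     // true
console.log(decimalish, dyadic);
```

A decimal type makes 1/10 exact, but 1/3 remains an approximation in either base.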
> If you're going to do a lot of fast floating point operations for something like graphics or neural networks, these errors are fine. Speed is more important than exact accuracy.
Um... that really depends. If you have an algorithm that is numerically unstable, these errors will quickly lead to a completely wrong result. Using a different type is not going to fix that, of course, and you need to fix the algorithm.
In the world of money, it is rare to have to work past 3 decimal places. Bond traders operate in 32nds, so that might present some difficulties, but they really just want rounding at the hundredths.
Now, when you’re talking about central bank accruals (or similar sized deposits) that’s a bit different. In these cases, you have a very specific accrual multiple, multiplied by a balance in the multiple hundreds of billions or trillions. In these cases, precision with regards to the interest accrual calculation is quite significant, as rounding can short the payor/payee by several millions of dollars.
Hence the reason bond traders have historically traded in fractions of 32.
A sample bond trade:
‘Twenty sticks at a buck two and five eighths bid’
‘Offer At 103 full’
‘Don’t break my balls with this, I got last round at delmonicos last night’
‘Offer 103 firm, what are we doing’
‘102-7 for 50 sticks’
‘Should have called me earlier and pulled the trigger, 50 sticks offer 103-2’
‘Fuck you, I’m your daughter’s godfather’
‘In that case, 40 sticks, 103-7 offer’
‘Fuck you, 10 sticks, 102-7, and you buy me a steak, and my daughter a new dress’
‘5 sticks at 104, 45 at 102-3 off tape, and you pick up bar tab and green fees’
‘Done’
‘You own it’
That’s kinda how bonds are traded.
Ref:
Stick: million
Bond pricing: dollar price + number divided by 32
Delmonicos: money bonfire with meals served
Hear, hear! It would be great if JavaScript had any integral type that we could build decimals, rationals, arbitrarily large integers, and so on on top of. It’s technically doable with doubles if you really know what you’re doing, but it would be so much easier with an integral type.
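Modern JavaScript does have such a building block in BigInt, an arbitrary-precision integer type. A minimal sketch of exact money handling on top of it (the variable names here are just illustrative):

```javascript
// Keep amounts as an integer count of the smallest unit (cents), using BigInt:
const priceCents = 1999n;             // $19.99
const qty = 3n;
const totalCents = priceCents * qty;  // exact: 5997n, no rounding possible
// Format for display only at the edge of the system:
const display = `$${totalCents / 100n}.${(totalCents % 100n).toString().padStart(2, '0')}`;
console.log(display); // "$59.97"
```

The integer type carries the arithmetic; decimal placement becomes a formatting concern.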
'Decimal' is a red herring. The number base doesn't matter. (And what are you going to do when you need currency conversions, anyway?)
Floats are a digital approximation of real numbers, because computers were originally designed for solving math problems - trigonometry and calculus, that is.
For money you want rational numbers, not reals. Unfortunately, computers never got a native rational number type, so you'll have to roll your own.
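A rolled-your-own rational type can be surprisingly small. This sketch (helper names are hypothetical) stores BigInt numerator/denominator pairs reduced by their gcd:

```javascript
// Greatest common divisor, for keeping fractions in lowest terms:
const gcd = (a, b) => (b === 0n ? (a < 0n ? -a : a) : gcd(b, a % b));

function rat(num, den) {
  const g = gcd(num, den);
  return { num: num / g, den: den / g };
}

const add = (x, y) => rat(x.num * y.den + y.num * x.den, x.den * y.den);
const eq = (x, y) => x.num === y.num && x.den === y.den;

// 1/10 + 2/10 is exactly 3/10 -- no rounding anywhere:
const sum = add(rat(1n, 10n), rat(2n, 10n));
console.log(eq(sum, rat(3n, 10n))); // true
```

The cost is that numerators and denominators grow under repeated arithmetic, which is why production rational libraries spend most of their effort on normalization.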
In Haskell, the type of any numeric literal is any type in the `Num` class. That means literals can be floating point, fractional, or integral "for free" depending on where you use them in your program.
`0.75 + pi` is of type `Floating a => a`, but `0.75 + 1%4` is of type `Rational`.
Hm... what happens if you've got a neural network trained to make decisions in the financial domain?
Is there a way to exploit the difference between numeric precision underlying the neural network and the precision used to represent the financial transactions?
I'd agree with saner defaults, especially in web development. I can understand that if you want strictly one number type, it may make sense to opt for floating point to eke out performance when you do need it, but I'd rather see high precision as the default (most people expect to be able to write an accurate calculator app in JavaScript without much work), with floating-point operations as the opt-in.
I remember in college when we learned about this and I had the thought, "Why don't we just store the numerator and denominator?", and threw together a little C++ class complete with (then novel, to me) operator-overloads, which implemented the concept. I felt very proud of myself. Then years later I learned that it's a thing people actually use: https://en.wikipedia.org/wiki/Rational_data_type
While it's true that floating point has its limitations, this stuff about not using it for money seems overblown to me. I've worked in finance for many years, and it really doesn't matter that much. There are de minimis clauses in contracts that basically say "forget about the fractions of a cent". Of course it might still trip up your position checking code, but that's easily fixed with a tiny tolerance.
That's one of the worst domain names ever. When the topic comes up, I always remember "that single-serving website with a domain name that looks like a number" and then take a surprisingly long time searching for it.
I have written a test framework and I am quite familiar with these problems, and comparing floating point numbers is a PITA. I had users complaining that 0.3 is not 0.3.
The code managing these comparisons turned out to be more complex than expected. The idea is that values are represented as ranges, so, for example, the IEEE-754 "0.3" is represented as ]0.299~, 0.300~[ which makes it equal to a true 0.3, because 0.3 is within that range.
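The interval representation above is one approach; the simpler and more common one in test frameworks is a tolerance-based comparison, sketched here:

```javascript
// Equal if the difference is tiny relative to the operands' magnitude:
function approxEqual(a, b, relTol = 1e-9) {
  return Math.abs(a - b) <= relTol * Math.max(Math.abs(a), Math.abs(b));
}

console.log(0.1 + 0.2 === 0.3);           // false: exact comparison trips on rounding
console.log(approxEqual(0.1 + 0.2, 0.3)); // true
```

The catch, as with any tolerance, is choosing relTol: too tight and rounding noise fails the test, too loose and real bugs slip through. An absolute-tolerance term is also needed when comparing against values near zero, where a relative bound collapses to nothing.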
Yep. The TL;DR of a numerical analysis class I took is that if you're going to sum a list of floats, sort it by increasing magnitude first so that the tiny values aren't rounded away every time.
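A quick sketch of the effect, plus the stronger fix (Kahan's compensated summation), which tracks the rounding error in a second variable:

```javascript
function kahanSum(xs) {
  let sum = 0;
  let c = 0;                   // running compensation: the lost low-order bits
  for (const x of xs) {
    const y = x - c;
    const t = sum + y;
    c = (t - sum) - y;         // (t - sum) is what actually got added to the total
    sum = t;
  }
  return sum;
}

// One big value followed by many tiny ones:
const xs = [1, ...Array(10000).fill(1e-16)];
const naive = xs.reduce((a, b) => a + b, 0);  // each 1e-16 is swallowed: exactly 1
const sorted = [...xs].sort((a, b) => Math.abs(a) - Math.abs(b))
  .reduce((a, b) => a + b, 0);                // tiny values accumulate first
console.log(naive, sorted, kahanSum(xs));     // sorted and Kahan land near 1 + 1e-12
```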
I feel like it should really be emphasised that the reason this occurs is due to a mismatch between binary exponentiation and decimal exponentiation.
0.1 = 1 × 10^-1, but there is no integer significand s and integer exponent e such that 0.1 = s × 2^e.
When this issue comes up, people seem to often talk about fixing it by using decimal floats or fixed-point numbers (using some 10^x divisor). If you change the base, you solve the problem of representing 0.1, but whatever base you choose, you're going to have unrepresentable rationals. Base 2 fails to represent 1/10 just as base 10 fails to represent 1/3. All you're doing by using something based around the number 10 is supporting numbers that we expect to be able to write on paper, not solving some fundamental issue of number representation.
Also, binary-coded decimal is irrelevant. The thing you're wanting to change is which base is used, not how any integers are represented in memory.
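A sketch of that trade-off using base-10 fixed point (a scaled BigInt; the helper names are hypothetical): 0.1 becomes exact, but 1/3 still has to round somewhere.

```javascript
const SCALE = 10000n;                                // 4 decimal digits of precision
const fromTenths = (n) => n * SCALE / 10n;           // 0.1 -> 1000n, now exact
const divRound = (a, b) => (a * SCALE + b / 2n) / b; // scaled division with rounding

console.log(fromTenths(1n) + fromTenths(2n) === fromTenths(3n)); // true: 0.1 + 0.2 = 0.3
const third = divRound(1n, 3n);                      // 3333n, i.e. 0.3333
console.log(third * 3n === SCALE);                   // false: 0.9999, not 1
```

Exactly the base-2 situation in mirror image: which fractions survive depends only on the prime factors of the chosen base.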
One small tip about printf for floating point numbers. In addition to "%f", you can also print them using "%g". While the precision specifier in %f refers to digits after the decimal period, in %g the precision refers to the number of significant digits. The %g version is also allowed to use exponential notation, which often results in more pleasant-looking output than %f.
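JavaScript's formatting methods mirror this split, if that's a more familiar setting: toFixed counts digits after the decimal point like %f, while toPrecision counts significant digits like %g and may switch to exponential notation.

```javascript
const x = 0.000123456789;
console.log(x.toFixed(6));                 // "0.000123"    -- most digits are lost
console.log(x.toPrecision(6));             // "0.000123457" -- six significant digits kept
console.log((123456789.5).toPrecision(6)); // "1.23457e+8"  -- switches to exponent form
```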
One of my favorite things about Perl 6 is that decimal-looking literals are stored as rationals. If you actually want a float, you have to use scientific notation.
Edit: Oh wait, it's listed in the main article under Raku. Forgot about the name change.
The other (and more important) matter, which is not even mentioned, is comparison. E.g. in “rational by default in this specific case” languages (Perl 6),
> 0.1+0.2==0.3
True
Or, APL (now they are floats there! But comparison is special)
Please note that Perl 6 has been renamed to "Raku" (https://raku.org using #rakulang as a tag for social media).
In Raku, the comparison operator is basically a subroutine that uses multiple dispatch to select the correct candidate for handling comparisons between `Rat`s and other numeric objects.
But this isn't a sales pitch. Some people are just bad at things. The explanation on that page requires grade-school levels of math. I think math that's taught in grade school can objectively be called simple. Some people suck at math. That's ok.
I'm very geeky. I get geeky things. Many times geeky things can be very simple to me.
I went to a dance lesson. I'm terribly uncoordinated physically. They taught me a very 'simple' dance step. The class got it right away. The more physically able got it in 3 minutes. It took me a long time to get, having to repeat the beginner class many times.
Instead of being self-absorbed and expecting the rest of the world to anticipate every one of my possible ego-dystonic sensibilities, I simply accepted that I'm not good at that. It makes it easier for me and for the rest of the world.
The reality is, just like the explanation and the dance step, they are simple because they are relatively simple for the field.
I think such over-sensitivity is based on a combination of expecting never to encounter ego-dystonic events/words, which is unrealistic and removes many/most growth opportunities in life, and the idea that things we don't know can be simple (basically, reality is complicated). I think we've gotten so used to catering to the lowest common denominator, we've forgotten that it's ok for people to feel stupid/ugly/silly/embarrassed/etc. Those bad feelings are normal, feeling them is ok, and they should help guide us in life, not be something to run from or get upset if someone didn't anticipate your ego-dystonic reaction to objectively correct usage of words.
The problem is that almost everything is simple once you understand it. Once you understand something, you think it's pretty simple to explain it.
On the other hand, people say "it's actually pretty simple" to encourage someone to listen to the explanation rather than to give up before they even heard anything, as we often do.
I read the rest of your reply but I also haven’t let go of the possibility that we’re both (or precisely 100.000000001% of us collectively) as thick as a stump.
I had to use Google Translate for this one, because I didn't expect the translation into my language to be so literal.
My take is that this sentence is badly worded. How do these fractions specifically use those prime factors?
Apparently the idea is that a fraction 1/N, where N is a prime factor of the base, has a terminating expansion in that base.
So for base 10, at least 1/2 and 1/5 have to terminate.
And given that a product of terminating fractions terminates, no matter what combination of those two you multiply, you'll get a number that terminates in base 10: 1/2 * 1/2 = 1/4 terminates, (1/2)^3 = 1/8 terminates, etc.
Same goes for sums, of course.
So apparently those fractions use those prime factors by being products of their reciprocals, which isn't mentioned here but should have been.
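This is straightforward to check with binary floats, where the only available prime factor is 2:

```javascript
// Denominators built purely from 2s terminate in binary, so arithmetic is exact:
console.log(0.5 + 0.25 === 0.75);  // true: 1/2 + 1/4 = 3/4, all dyadic
console.log(0.125 * 8 === 1);      // true: (1/2)^3 scaled back up
// A denominator with the prime 5 in it (1/10, 1/5) never terminates in base 2:
console.log(0.1 + 0.2 === 0.3);    // false
```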
PostgreSQL figured this out many years ago with its Decimal/Numeric type. It can handle numbers of any size and performs fractional arithmetic perfectly accurately. How amazing for the 21st century! It's comically tragic to me that all of the mainstream programming languages are still so far behind, so primitive that they don't have a native accurate number type that can handle fractions.
I still remember when I encountered this and nobody else in the office knew about it either. We speculated about broken CPUs and compilers until somebody found a newsgroup post that explained everything. Makes me wonder why we haven't switched to a better floating point model in the last decades. It will probably be slower but a lot of problems could be avoided.
In JavaScript, you could use a library like decimal.js. For simple situations, could you not just convert the final result to a precision of 15 or less?
From Wikipedia: "If a decimal string with at most 15 significant digits is converted to IEEE 754 double-precision representation, and then converted back to a decimal string with the same number of digits, the final result should match the original string." --- https://en.wikipedia.org/wiki/Double-precision_floating-poin...
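In JavaScript that round trip can be done with toPrecision; a sketch:

```javascript
const x = 0.1 + 0.2;                       // stored as 0.3000000000000000444...
const rounded = Number(x.toPrecision(15)); // round to 15 significant digits
console.log(x === 0.3);                    // false
console.log(rounded === 0.3);              // true: the error lives past digit 15
```

This is a display fix, not an arithmetic fix: errors can accumulate across many operations until they cross the 15-digit boundary.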
This is a great shibboleth for identifying mature programmers who understand the complexity of computers, vs arrogant people who wonder aloud how systems developers and language designers could get such a "simple" thing wrong.
This was proposed in the late '90s by Mike Cowlishaw, but the rest of the standards committee would have none of it.
Give a spreadsheet like Excel =0.1+0.2-0.3 and it will see what you are trying to do and return 0.
Give it anything slightly more complicated, such as =(0.1+0.2-0.3), and the fudge won't trip; in this example it displays 5.55112E-17 or similar.
https://people.eecs.berkeley.edu/~wkahan/Mind1ess.pdf
(and plenty of other rants: https://people.eecs.berkeley.edu/~wkahan/ )
2015.000000000000: https://news.ycombinator.com/item?id=10558871
Also the "field" of floating point numbers is not commutative†, (can run on JS console:)
x=0;for (let i=0; i<10000; i++) { x+=0.0000000000000000001; }; x+=1
--> 1.000000000000001
x=1;for (let i=0; i<10000; i++) { x+=0.0000000000000000001; };
--> 1
Although most of the time a+b===b+a can be relied on. And for most of the stuff we do on the web it's fine!††
† edit: Please s/commutative/associative/, thanks for the comments below.
†† edit: that's wrong! Replace with (a+b)+c === a+(b+c)
What is failing is associativity, i.e. (a+b)+c==a+(b+c)
For example
(.0000000000000001 + .0000000000000001 ) + 1.0
--> 1.0000000000000002
.0000000000000001 + (.0000000000000001 + 1.0)
--> 1.0
In your example, you are mixing both properties,
(.0000000000000001 + .0000000000000001) + 1.0
--> 1.0000000000000002
(1.0 + .0000000000000001) + .0000000000000001
--> 1.0
but the difference is caused by the lack of associativity, not by the lack of commutativity.
[1] Perhaps you must exclude -0.0. I think it is commutative even with -0.0, but I'm never 100% sure.
OK.
You've identified a problem, but it isn't that addition is noncommutative.
1.0 + 1e-16 == 1e-16 + 1.0 == 1.0 as well as 1.0 + 1e-15 == 1e-15 + 1.0 == 1.000000000000001
however (1.0 + (1e-16 + 1e-16)) == 1.0 + 2e-16 == 1.0000000000000002, whereas ((1.0 + 1e-16) + 1e-16) == 1.0 + 1e-16 == 1.0
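The whole exchange condenses to two runnable checks:

```javascript
const a = 1.0, b = 1e-16, c = 1e-16;
// IEEE-754 addition is commutative (for ordinary operands like these):
console.log(a + b === b + a);             // true
// ...but not associative: the grouping decides what gets rounded away.
console.log((a + b) + c === a + (b + c)); // false
```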
And the length (but not value) winner is Go with: 0.299999999999999988897769753748434595763683319091796875
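The same digits are recoverable in any language that rounds the true binary value when formatting; in JavaScript, toFixed is specified that way, and the double nearest to 0.3 has an exact, finite decimal expansion of 54 digits:

```javascript
// Print the exact stored value of the double literal 0.3:
console.log((0.3).toFixed(54));
// "0.299999999999999988897769753748434595763683319091796875"
```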
The explanation then goes on to be very complex, e.g. "it can only express fractions that use a prime factor of the base".
Please don't say things like this when explaining things to people, it makes them feel stupid if it doesn't click with the first explanation.
I suggest instead "It's actually rather interesting".
Yep, I've thrown 10,000 round house kicks and can teach you to do one. It's so easy.
In reality, it will be super awkward, possibly hurt, and you'll fall on your ass one or more times trying to do it.
https://swift.org/blog/numerics/