antonyme|6 years ago
So as far as I'm concerned, whatever performance cost these alternate methods may have, it would be well worth it to avoid the pitfalls of IEEE floats. Intel chips have had BCD support in machine code; I'm surprised nobody has made a decent fixed point lib that is widely used already.
rrss|6 years ago
If you don't care about performance, then the actual solution has no dependency on hardware:
1. Replace the default format for numbers with a decimal point in suitably high-level languages with an infinite-precision format.
2. Teach people using other languages about floating point and how they may want to use integers instead.
The end. No multi-generation hardware transition required.
IMO, IEEE 754 is an exceptionally good format. It has real problems, but they aren't the ones widely known to people unfamiliar with floats (e.g. 0.1 + 0.2 != 0.3 isn't one of them).
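A two-line Python sketch of the distinction being drawn here (illustrative, not from the original comment):

```python
# 0.1 and 0.2 have no exact binary representation, so the classic surprise:
print(0.1 + 0.2 == 0.3)   # False
# But small integers and their sums are exact in IEEE 754, so this is fine:
print(1.0 + 2.0 == 3.0)   # True
```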
IEEE754|6 years ago
Unlimited precision in any radix-point-based format does not solve representation error. If you don't understand why:
If you are truly only working with rational numbers and only using the four basic arithmetic operations, then only a variable-precision fractional representation (i.e. a numerator and denominator, which is indifferent to the underlying base) will be able to store any number without error (assuming it fits in memory). Of course, if you are using transcendental functions, or want to use irrational numbers such as π, then by definition there is no numerical solution that avoids error in any finite system.
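For a concrete sketch of the distinction, Python's `fractions.Fraction` implements exactly this numerator/denominator representation (illustrative example, not from the thread):

```python
from fractions import Fraction

# Exact rational arithmetic: a numerator/denominator pair, base-independent.
tenth = Fraction(1, 10)
print(sum([tenth] * 10) == 1)    # True: ten additions, zero error
# Contrast with binary floating point, where 1/10 is already inexact:
print(sum([0.1] * 10) == 1.0)    # False
```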
antonyme|6 years ago
I'm not suggesting we replace all our current HW with chips that implement posits (let's fix branch prediction first!!). More that FP should be opt-in for most HLLs.
piadodjanho|6 years ago
They have the survey online at [1] in case you want to see how much you know about fp behavior.
[1] http://presciencelab.org/float
DannyB2|6 years ago
A 64-bit integer is big enough to express the US National Debt in Argentine Pesos.
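A back-of-the-envelope check of this claim, using rough 2019-era figures (both numbers below are loose assumptions for illustration only):

```python
# Rough, illustrative 2019-era figures -- not exact data:
debt_usd = 23 * 10**12           # ~US$23 trillion national debt
usd_to_ars = 60                  # ~60 pesos per dollar at the time
debt_centavos = debt_usd * usd_to_ars * 100   # count whole centavos
print(debt_centavos < 2**63 - 1)  # True: fits in a signed 64-bit integer
```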
toolslive|6 years ago
It's actually logical: the number of developers doubles roughly every 5 years. It means that half of the developers have less than 5 years of experience. If they don't teach you this in school (university), you will have to learn from someone who knows, but chances are the other developers are as clueless as you.
IEEE754|6 years ago
Note that a fixed radix point does not solve the common issues with representing rational base-10 fractions. A base-10 fixed radix solution would, and so would IEEE 754's decimal64 format, which eliminates representation error when working exclusively in base 10 (e.g. finance). But these are not found in common hardware, and they do nothing to reduce the propagation of error due to compounding with limited precision in any base.
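Python's `decimal` module (an implementation of IEEE 754 decimal arithmetic) illustrates both halves of this point; a hedged sketch:

```python
from decimal import Decimal

# Base-10 representation eliminates *representation* error for
# decimal fractions, which is what finance cares about:
print(Decimal('0.1') + Decimal('0.2') == Decimal('0.3'))   # True

# But finite precision in any base still rounds, so error compounds:
print(Decimal(1) / Decimal(3) * 3 == Decimal(1))           # False
```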
PaulHoule|6 years ago
The numerator is just an integer, and integers are just integers; the base doesn't matter. But if the exponent is base 2, then you can have 1/2, 1/4, 1/8 in the denominator, but not 1/5 or 1/10.
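This is easy to see by asking Python for the exact rational value of a few binary floats (illustrative):

```python
from fractions import Fraction

# Binary floats can only put powers of two in the denominator:
print(Fraction(0.5))    # 1/2  -- exact
print(Fraction(0.125))  # 1/8  -- exact
# 1/10 is not representable; the stored value is the nearest dyadic rational:
print(Fraction(0.1))    # 3602879701896397/36028797018963968
```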
microcolonel|6 years ago
However, as soon as you start doing anything interesting, you have limited precision as a matter of course.
mcv|6 years ago
Sadly most languages don't support something like that out of the box.
ScottFree|6 years ago
Where would one go to better understand how floating points are represented?
piadodjanho|6 years ago
> I shudder to think how many e-commerce sites use `float` for financial transactions!
An IEEE-754 float represents up to 9 significant decimal digits (a 24-bit binary significand) on a round trip; a double represents up to 17. The relative error per operation is at most half an ULP (about 1.1 × 10^-16 for a double). Likely irrelevant for most e-commerce.
Also, databases typically store currency values using fixed point.
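The digit counts can be demonstrated by round-tripping a value through single precision; a small sketch (the helper name is mine):

```python
import struct

def to_float32(x):
    """Round-trip a Python double through IEEE-754 single precision."""
    return struct.unpack('f', struct.pack('f', x))[0]

x = 0.123456789012345                   # ~15 significant decimal digits
print(to_float32(x) == x)               # False: a float keeps only ~7-9
print(abs(to_float32(x) - x) < 1e-7)    # True: but the error is tiny
```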
> Intel chips have had BCD support in machine code
BCD is a floating-point encoding, not fixed point. AFAIK, only Intel supports it, and only very precariously.
> I'm surprised nobody has made a decent fixed point lib that is widely used already.
Nonsense. If you do any scientific computation, you likely have Boost, GMP, or MPFR installed on your system. They support arbitrary-precision arithmetic with integer (a.k.a. fixed point), rational, and floating-point types.
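Python sketches the same capabilities those libraries expose to C/C++ (its built-in `int` is arbitrary precision, and `fractions` covers exact rationals); an illustrative example:

```python
from fractions import Fraction

# Arbitrary-precision integers out of the box, GMP-style:
print((10**50 + 1) - 10**50)        # 1, computed exactly
# Arbitrary-precision rationals:
print(Fraction(1, 3) + Fraction(1, 6) == Fraction(1, 2))   # True
```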
antonyme|6 years ago
LOL, sure ok. Worked on banking systems for 2 years and been doing scientific computing for many more. Pretty comfortable with fixed and floating point.
> [error bounds] Likely irrelevant for most e-commerce.
Those bounds are theoretical, and there are plenty of occasions I have come across in the past when rounding errors were observed. It was forbidden in the bank to use floating point! We went to enormous lengths to ensure numerical accuracy and stability across systems.
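The kind of drift banks worry about is easy to reproduce; an illustrative sketch (not any bank's actual code, obviously):

```python
# Accumulate a $0.10 charge a million times in binary floating point:
total = 0.0
for _ in range(1_000_000):
    total += 0.10
print(total == 100_000.0)       # False: rounding error has accumulated

# Integer cents are exact:
cents = sum([10] * 1_000_000)   # ten cents, a million times
print(cents == 10_000_000)      # True
```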
I think this article has a pretty good explanation:
https://dzone.com/articles/never-use-float-and-double-for-mo...
> Nonsense. If you do any scientific computation you have likely have Boost, GMP, MPFR installed in your system. They support arbitrary precision arithmetic with integer (aka fixed point), rational and floating point.
Yes, absolutely right; I have used several of those 3rd party libs myself, as well as hand-rolling fixed point code (esp for embedded systems). I didn't write what I intended. I meant to say that very few languages have first-class fixed point in their standard library. So long as the simple `float` is available as a POD, people will (mis-)use it.
I think in a general purpose HLL, a fixed decimal type should be the default, and you should have to opt in to IEEE-754 floating point.