Another way to explore floating point representation is to compare rounding modes, since the mode can significantly change what is generated for floating point operations. Most systems default to round-to-nearest-even, but round-toward-zero, round-toward-positive, and round-toward-negative also exist.
Bonus points if you also check results for x87 floats.
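Hardware float rounding modes aren't directly reachable from portable high-level code, but the effect is easy to sketch with Python's `decimal` module, which exposes the same four modes (a toy illustration at 4 significant digits, not actual IEEE 754 binary behavior):

```python
from decimal import Decimal, Context, ROUND_HALF_EVEN, ROUND_DOWN, ROUND_CEILING, ROUND_FLOOR

# The same division carried out under the four rounding modes.
modes = {
    "nearest-even": ROUND_HALF_EVEN,  # the usual default
    "toward zero":  ROUND_DOWN,
    "toward +inf":  ROUND_CEILING,
    "toward -inf":  ROUND_FLOOR,
}
for name, mode in modes.items():
    ctx = Context(prec=4, rounding=mode)
    print(name, ctx.divide(Decimal(1), Decimal(3)), ctx.divide(Decimal(-1), Decimal(3)))
```

For 1/3, only round-toward-positive gives 0.3334; for -1/3, only round-toward-negative gives -0.3334. Differences like this compound over a long iteration.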
Sesse__|2 years ago
Edit: I see the supplementary material addresses this: “One could attempt to avoid the problem by using arbitrary-precision floating point representation, or rational representation (using two integers, for numerator and denominator, i.e., keep everything as fractions), but that quickly leads to so many significant digits to multiply in every iteration that calculations become enormously slow. For many points in a Mandelbrot Set Image, the number of significant digits in a trajectory tends to double with each iteration, leading to O(n^2) time complexity for evaluating one point with n iterations. Iterations must then be kept very low (e.g., 20-30 on a modern PC!), which means that many higher-iteration points cannot be evaluated. So accumulated round-off error is hard to avoid.” 20-30 sounds crazy low even for O(n²), though.
Edit 2: This part seems wrong: “Cycle detection would work if floating point quantities could be compared exactly, but they cannot.” They certainly can. If you detect a cycle, you will never escape _given your working precision_. Floats can cycle just as well as fixed-point numbers can. But I suppose true cycles (those that would be cycles even in rationals) are extremely unlikely.
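Exact comparison of floats is indeed well-defined, and a brute-force sketch shows orbits repeating exactly at double precision (`first_exact_cycle` is a hypothetical helper, comparing complex values bit-for-bit via dict lookup):

```python
def first_exact_cycle(c, max_iter=10000):
    """Iterate z -> z*z + c from 0, comparing complex doubles exactly.
    Returns (iteration, cycle_length) at the first exact repeat, or None."""
    seen = {}          # value -> iteration index at which it first appeared
    z = 0 + 0j
    for i in range(max_iter):
        if z in seen:
            return i, i - seen[z]
        seen[z] = i
        z = z * z + c

print(first_exact_cycle(-1))      # exact period-2 cycle: 0, -1, 0, -1, ...
print(first_exact_cycle(-0.5))    # converges, then repeats exactly at double precision
```

For c = -1 the cycle is exact even in rationals; for c = -0.5 the orbit converges to an attracting fixed point and eventually lands on a value it has produced before, given the working precision.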
hedora|2 years ago
https://www.fractint.org/
From wikipedia:
> The name is a portmanteau of fractal and integer, since the first versions of Fractint used only integer arithmetic (also known as fixed-point arithmetic), for faster rendering on computers without math coprocessors. Since then, floating-point arithmetic and arbitrary-precision arithmetic modes have been added.
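A minimal sketch of the fixed-point idea (not Fractint's actual code; the format width and helper names here are made up for illustration):

```python
FRAC_BITS = 29                    # fixed-point format: value = raw / 2**29
ONE = 1 << FRAC_BITS

def to_fix(x):
    return int(round(x * ONE))

def fix_mul(a, b):
    # Multiply, then rescale. Python's >> floors toward -inf for negatives,
    # a rounding simplification that real implementations handle with care.
    return (a * b) >> FRAC_BITS

def mandel_iters_fixed(cr_f, ci_f, max_iter=100):
    """Escape-time iteration done entirely in integers (Fractint-style)."""
    cr, ci = to_fix(cr_f), to_fix(ci_f)
    zr = zi = 0
    four = 4 * ONE
    for i in range(max_iter):
        zr2, zi2 = fix_mul(zr, zr), fix_mul(zi, zi)
        if zr2 + zi2 > four:
            return i
        zr, zi = zr2 - zi2 + cr, 2 * fix_mul(zr, zi) + ci
    return max_iter

print(mandel_iters_fixed(-0.5, 0.0))   # inside the set: never escapes
print(mandel_iters_fixed(1.0, 1.0))    # escapes after 2 iterations
```

On hardware without an FPU, the shift-and-multiply above compiles to cheap integer ops, which is exactly why early Fractint was fast.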
drdeca|2 years ago
if 0 < a_L < a < a_R , 0 < b_L < b < b_R , then (a + b i)^2 = (a^2 - b^2) + 2 a b i , and a_L^2 - b_R^2 < a^2 - b^2 < a_R^2 - b_L^2 and 2 a_L b_L < 2 a b < 2 a_R b_R ,
uh,
(a_R^2 - b_L^2) - (a_L^2 - b_R^2) = (a_R^2 - a_L^2) + (b_R^2 - b_L^2) = 2 (a_R - a_L) ((a_L + a_R)/2) + 2 (b_R - b_L) ((b_L + b_R)/2)
And, a_R b_R - a_L b_L = a_R b_R - a_R b_L + a_R b_L - a_L b_L = a_R (b_R - b_L) + (a_R - a_L) b_L
So, it looks like the size of each interval is roughly (size of the actual coordinate) * (size of the interval in the previous step), times about 2?
groby_b|2 years ago
I mean, 30 iterations means ~2^30 sig digits. Any single operation on numbers that size is already expensive in pure memory throughput. For the squaring, use FFT-based multiplication and each step is O(n log n) in the number of digits n: the final iteration alone is roughly 30 * 2^30 ops, call it 2^35, and with digit counts doubling each step the whole trajectory costs about twice that. So ~2^36 ops for a single point.
A modest 1280*768 image is about 10^6 points, which puts us on the order of 10^17 ops per image. Even pretending all memory access is perfectly sequential (it's not) and the machine sustains tens of Gops/s, that's days of compute. Nobody waits days for anything. (Unless it's a web page downloading JS.)
Now add the fact that I lied: memory access can't be fully sequential here, we're spilling far past L3 cache with its ~40-cycle penalty, and every additional iteration doubles the digit count and therefore more than doubles the time.
And so, yes, it's really expensive even for modern PCs.
If you have a GPU, you can shuffle it off to the GPU, they're pretty good at FFTs - but even that's going to buy you, if we're really really optimistic, another 20 or so iterations before it bogs down. Doubling digits every cycle will make any hardware melt quickly.
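The digit growth the supplementary material describes is easy to reproduce: iterate z -> z^2 + c exactly over the rationals with `fractions.Fraction` and watch the denominator's bit length double every step (a toy 1-D slice with c real, picked so no cancellation occurs):

```python
from fractions import Fraction

# Exact escape-time iteration on the real axis: z -> z^2 + c as fractions.
# With c = 1/3 the denominator after i steps is exactly 3**(2**(i-1)),
# so its bit length roughly doubles each iteration.
c = Fraction(1, 3)
z = Fraction(0)
for i in range(1, 9):
    z = z * z + c
    print(i, z.denominator.bit_length())
```

After only 8 iterations the denominator is 3^128, a 203-bit number; 20 more doublings and a single multiplication is a serious bignum operation, which is where the O(n^2)-per-point behavior comes from.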
zokier|2 years ago
BlueTemplar|2 years ago
I thought that 0.5 ought to be rounded to 1, mirroring 0.0 rounded to 0!
But I can now see how this only works for reals, while anything non-continuous (even rationals?) could end up with compounding errors...
rocqua|1 year ago