top | item 39686073

rkevingibson | 1 year ago

From a cursory look at the code, I don't see any use of fused multiply-add (FMA), which would likely help with precision in a number of places in the float version. The "problem in more detail"[1] readme specifically calls out the computation of `x^2 - y^2` as a source of error, and there are methods that dramatically reduce that error with FMAs[2].

[1] https://github.com/ProfJski/FloatCompMandelbrot/blob/master/... [2] https://pharr.org/matt/blog/2019/11/03/difference-of-floats
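For the curious, the technique in Pharr's post is Kahan's difference-of-products algorithm. A minimal C++ sketch (my own illustration, not code from the repo) might look like this:

```cpp
#include <cassert>
#include <cmath>

// Kahan's difference-of-products: computes a*b - c*d, using an extra FMA
// to recover the rounding error of c*d, so the cancellation is nearly exact.
double diff_of_products(double a, double b, double c, double d) {
    double cd  = c * d;
    double err = std::fma(-c, d, cd);   // rounding error committed in c*d
    double dop = std::fma(a, b, -cd);   // a*b - cd with a*b computed exactly
    return dop + err;
}
```

For the real part of z^2 in the Mandelbrot iteration, `diff_of_products(x, x, y, y)` computes x^2 - y^2 without the catastrophic cancellation of the naive expression.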


fanf2 | 1 year ago

The author needs to learn about techniques for rendering deep zooms into the Mandelbrot set. Since 2013 it has been possible to render images that are 2^-hundreds across using mostly double precision arithmetic, apart from a few anchor points calculated with multiprecision (hundreds of bits) arithmetic.

The deep zoom mathematics includes techniques for introspecting the iterations to detect glitches, which need extra multiprecision anchor points to be calculated.

https://mathr.co.uk/blog/2021-05-14_deep_zoom_theory_and_pra...

https://dirkwhoffmann.github.io/DeepDrill/docs/Theory/Mandel...
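The core of the perturbation idea can be sketched roughly as follows (an illustrative sketch only: it uses long double as a stand-in for true multiprecision, and the names are mine, not from the linked articles). One reference orbit Z_n is computed at high precision; each pixel then iterates only its offset d_n = z_n - Z_n in plain double, via d_{n+1} = 2 Z_n d_n + d_n^2 + dc:

```cpp
#include <complex>
#include <vector>

// Reference orbit at an anchor point c, computed at higher precision
// (long double here stands in for hundreds-of-bits multiprecision).
std::vector<std::complex<long double>> reference_orbit(
        std::complex<long double> c, int max_iter) {
    std::vector<std::complex<long double>> orbit;
    std::complex<long double> z = 0;
    for (int n = 0; n < max_iter && std::abs(z) < 2; ++n) {
        orbit.push_back(z);
        z = z * z + c;
    }
    return orbit;
}

// Iterate a pixel's offset from the reference orbit in plain double.
// dc is the pixel's offset from the anchor point in the c-plane.
int perturbed_iterations(const std::vector<std::complex<long double>>& Z,
                         std::complex<double> dc) {
    std::complex<double> d = 0;
    for (size_t n = 0; n < Z.size(); ++n) {
        std::complex<double> Zn(static_cast<double>(Z[n].real()),
                                static_cast<double>(Z[n].imag()));
        std::complex<double> z = Zn + d;     // this pixel's full orbit value
        if (std::abs(z) >= 2.0) return static_cast<int>(n);  // escaped
        d = 2.0 * Zn * d + d * d + dc;       // d_{n+1} = 2 Z_n d_n + d_n^2 + dc
    }
    return static_cast<int>(Z.size());
}
```

The glitch detection the articles describe (deciding when d has drifted too far from the reference and a new anchor is needed) is omitted here.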

In my experience of non-deep-zoom rendering, and contrary to the author's arguments, period detection works well for speeding up renders. It appeared to be fairly safe from false positives. https://dotat.at/@/2010-11-16-interior-iteration-with-less-p... https://dotat.at/prog/mandelbrot/cyclic.png
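As an illustration of the idea (my own Brent-style sketch, not the implementation from the linked post), a periodicity check compares the current iterate against a saved earlier iterate whose index doubles over time; a near-exact repeat means the orbit has entered a cycle and the point is interior:

```cpp
#include <cmath>
#include <complex>

// Returns the iteration count on escape, or -1 if a cycle was detected
// (point treated as interior). The tolerance is an assumption of mine.
int iterate_with_period_check(std::complex<double> c, int max_iter) {
    std::complex<double> z = 0, saved = z;
    int next_save = 1;                       // Brent-style doubling schedule
    for (int n = 0; n < max_iter; ++n) {
        z = z * z + c;
        if (std::norm(z) > 4.0) return n + 1;        // |z| > 2: escaped
        if (std::abs(z.real() - saved.real()) < 1e-15 &&
            std::abs(z.imag() - saved.imag()) < 1e-15)
            return -1;                               // cycle found: interior
        if (n == next_save) { saved = z; next_save *= 2; }
    }
    return max_iter;
}
```

Interior points that converge to an attracting cycle are detected in far fewer iterations than the bailout limit, which is where the speedup comes from.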

infogulch | 1 year ago

I've always wondered how those "Ten minute Mandelbrot zoom" videos worked, because there's no way double would last that long at a tolerable zoom rate.

The perturbation technique is interesting. Calculating just a few points with super high precision and then filling in the pixels in between by adding an offset and continuing with lower precision halfway through the calculation seems plausible at a glance, but I'll have to read that more carefully later.

Thanks for sharing!

firebot | 1 year ago

I've had some great success using posits for fractals. Unfortunately, software posits are rather slow. But mathematically the results were great.

sim7c00 | 1 year ago

Super interesting, thanks, really.

teo_zero | 1 year ago

But the goal of the study is expressly to highlight precision issues. Using such techniques wouldn't help to make them surface.

jacobolus | 1 year ago

I wish someone would come up with a better explicit FMA syntax than std::fma(-c, d, cd) ... maybe something along the lines of ((-c * d + cd)) with special brackets, or (-c) ⊠ d + cd with a special multiplication symbol.

And if only we could effectively use FMAs from javascript...