top | item 23848902

_bxg1|5 years ago

> we had a huge ordeal converting our 32-bit space weather model to 64-bit because the change in precision changed our results, so we couldn't publish (in good conscience) without making sure the results were within a good margin of error

This is a really interesting dimension to the transition that I've never considered before: an actual change in correctness, not just compatibility. I would have thought that for floats it would just add more decimal places.

giardini|5 years ago

One of our numerical analysis (NA) professors had worked at NASA and at IBM, and had enormous real-world experience in NA. Several weeks into the class he challenged us to write a correct square root routine in FORTRAN. Over the next two weeks submissions were presented in class, where our professor showed how each had multiple fatal errors. He let it stew awhile: two weeks before semester's end he presented and explicated a correct implementation of the square root, which turned out to be far more lengthy and involved than we had imagined. Even then, he said, there was always the possibility of more bugs being present. That was a really eye-opening class.

Animats|5 years ago

I saw that done years ago in assembler for UNIVAC mainframes. Tear apart the exponent and mantissa. Shift mantissa if exponent is odd. Shift exponent right one bit to get square root of exponent. Look up starting guess from table using a few high bits of mantissa. Run two iterations of Newton's method. Reassemble mantissa and exponent. Normalize. Done.

blackrock|5 years ago

Post the code here, and educate us.
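The scheme Animats outlines can be sketched in a few lines. This is a hedged illustration, not the UNIVAC routine: it's Python rather than assembler, `math.frexp`/`math.ldexp` stand in for tearing apart and reassembling the bits, a crude linear fit stands in for the lookup table, and `sqrt_sketch` is a made-up name.

```python
import math

def sqrt_sketch(x):
    """Square root via exponent/mantissa split plus Newton's method,
    following the steps Animats describes (frexp/ldexp stand in for
    the raw bit surgery)."""
    if x <= 0.0:
        return 0.0                 # ignore negatives/NaN for brevity
    m, e = math.frexp(x)           # x = m * 2**e, with m in [0.5, 1)
    if e % 2:                      # odd exponent: fold a factor of 2
        m *= 2.0                   # into the mantissa (now m in [1, 2))
        e -= 1                     # so the exponent halves evenly
    y = 0.59 + 0.41 * m            # starting guess (lookup-table stand-in)
    for _ in range(4):             # Newton: y <- (y + m/y) / 2;
        y = 0.5 * (y + m / y)      # error roughly squares each pass
    return math.ldexp(y, e // 2)   # reassemble with the halved exponent
```

Because Newton's method converges quadratically, the rough linear guess only costs a few iterations; four passes already reach double precision across the whole mantissa range.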

codezero|5 years ago

FWIW, I didn't bother getting too deep into an analysis (I was an undergrad and not really savvy enough to do a deep dive, but maybe I could now if I went back :) )

Our model was a space weather model of solar wind speed and density. It was very iterative, and I think that iterative nature is what contributed to quirks in the transition - think of the issues when summing, say, 0.1 ten times in Python (or other languages). Ultimately, we didn't need to do any real work to show that our converged model was within a margin of error and still gave the same results in terms of predicting solar flares - but it was still a lingering curiosity that the numbers weren't exactly the same (as you say!). Most of the work went into finding the right compiler parameters that wouldn't break existing code while still taking advantage of the larger memory space.
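The 0.1 example is easy to reproduce with just the standard library. A toy sketch (the `f32` helper is defined here, not a real API) showing that 32-bit and 64-bit accumulation drift by different amounts - a miniature version of why an iterative model's numbers shift after a precision change:

```python
import struct

def f32(x):
    """Round a Python float (64-bit) to the nearest 32-bit float
    by packing and unpacking it as a single-precision value."""
    return struct.unpack('f', struct.pack('f', x))[0]

total64 = 0.0
total32 = 0.0
for _ in range(10):
    total64 += 0.1                      # 64-bit accumulation
    total32 = f32(total32 + f32(0.1))   # 32-bit rounding at every step

print(repr(total64))   # close to, but not exactly, 1.0
print(repr(total32))   # drifts by a different amount than the 64-bit sum
```

Neither sum is exactly 1.0, and the two disagree with each other, because 0.1 has no exact binary representation and each precision rounds the running total differently.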

_bxg1|5 years ago

Ah that makes sense