silentvoice | 5 months ago
The "numerical" part is a minefield because it will take all your math and demolish it. Nearly every theoretical result you proved above will no longer hold and will require extra-special handholding in code to retain _some_ utility.
As such I think the language to use is the one that gets you as fast as possible from an idea to seeing whether it crosses the numerical minefield unscathed, and these days that is python. It is just so fast to test a concept, get immediate feedback in the form of plotting or plain dumb logging if you like, and you can almost instantly share the result with someone else, even if you're on ARM+linux and they're on Intel+windows.
The most problematic issue with python & numpy, as it relates to learning the _numerical_ side of linear algebra, is making sure you haven't unintentionally promoted a floating point precision somewhere. For example: if your key claim is that an algorithm works entirely in a working precision of 32 bits, but python silently promoted your key accumulator to 64 bits, you might get a misleading idea of how effective the algorithm was. But these promotions don't happen in a vacuum, and if you understand how the language works they won't happen.
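For illustration, a minimal sketch of the kind of silent promotion being described (the array names and the guard are made up, not from any particular codebase):

```python
import numpy as np

# Suppose the algorithm is meant to run entirely in 32-bit working precision.
a = np.ones(4, dtype=np.float32)

# Easy to forget: np.ones() (like np.zeros, np.arange with float step, etc.)
# defaults to float64.
b = np.ones(4)

# Mixing float32 and float64 arrays silently promotes the result to float64,
# so the float32 rounding behaviour you wanted to observe is gone.
c = a + b
print(c.dtype)  # float64

# One cheap guard: cast explicitly and assert the working dtype after each step.
d = a + b.astype(np.float32)
assert d.dtype == np.float32
```

The dtype assertions are crude but effective when you genuinely need every intermediate to stay in working precision.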
edit: & I have worked professionally with fortran for a long time, and have known some of the standards committee members and the BLAS working group, so I have no particular bias against the language.