thatjoeoverthr | 2 months ago
(I considered JAX, but the code in question was not amenable to a compute graph. Another option was to thread by fork, and use IPC.)
I liked the language itself more than expected. You get something like "generics" with tensors: suppose you pass a parameter, N, and you also want to pass a tensor whose shape must be (N, N). You can do this; a parameter's type constraints can reference other parameters.
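A minimal sketch of what that looks like (the routine and names are illustrative, not from the original code): an explicit-shape dummy array whose declared bounds reference another dummy argument.

```fortran
! Sketch: the array argument's declared shape references the dummy n,
! so the compiler knows grid is exactly n-by-n inside the routine.
subroutine scale_grid(n, grid, factor)
    implicit none
    integer, intent(in)    :: n
    real,    intent(in)    :: factor
    real,    intent(inout) :: grid(n, n)   ! shape constrained to (n, n)
    grid = grid * factor                   ! whole-array operation
end subroutine scale_grid
```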
Tensors and various operations on them are first-class in the language, so the compiler can optimise operations easily for the system you're building on. In my case, I got an 80% improvement from ifx over gfortran.
Invocation from Python was basically the same as for a C library. Both Python and Fortran have facilities for C interop, and Numpy can be asked to lay out tensors in a Fortran-compatible way.
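A hedged sketch of that interop pattern (the routine name is hypothetical): on the Fortran side, `iso_c_binding` and `bind(c)` expose a plain C symbol that Python's `ctypes` can then call.

```fortran
! Hypothetical routine exposed with a C-compatible ABI. Compiled into a
! shared library, it is callable from ctypes like any C function.
subroutine add_one(n, a) bind(c, name="add_one")
    use iso_c_binding, only: c_int, c_double
    integer(c_int), value :: n
    real(c_double)        :: a(n, n)
    a = a + 1.0d0
end subroutine add_one
```

On the Python side, something like `ctypes.CDLL("./libexample.so").add_one(...)` with an array made via `numpy.asfortranarray` would match this signature (library name illustrative).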
Part of what eased the port was that Numpy seems to be a kind of "Fortran wrapper". The ergonomics of tensor addressing, slicing, and views are identical.
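For illustration (not the original code), the NumPy side of that correspondence: `asfortranarray` gives column-major storage as a Fortran routine expects, and slicing reads almost like the Fortran section notation `a(2:3, :)`, apart from NumPy's 0-based, half-open indexing.

```python
import numpy as np

# Column-major ("Fortran-order") layout, as a Fortran routine would expect.
a = np.asfortranarray(np.arange(12, dtype=np.float64).reshape(3, 4))
print(a.flags['F_CONTIGUOUS'])   # True

# NumPy's a[1:3, :] corresponds to the Fortran section a(2:3, :)
# (NumPy: 0-based, exclusive upper bound; Fortran: 1-based, inclusive).
view = a[1:3, :]
print(view.base is a)            # True: a view, not a copy, as in Fortran
```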
prennert|2 months ago
At the time everyone seemed to default to using C instead. But Fortran is so much easier! It even has slicing notation for arrays, and the code looked so much like Numpy, as you say.
foxglacier|2 months ago
I wouldn't call multi-dimensional arrays tensors, though. That's a bit of a bastardization of the term that seems to have been introduced by ML guys.
It wasn't until I started using Fortran that I realized how similar it is to BASIC, which must have been a poor man's Fortran.
PaulHoule|2 months ago
If you didn't have vectors, Maxwell's equations would spill all over the place. Tensors, on the other hand, are used in places like continuum mechanics and general relativity, where something more than vectors is called for but you're living in the same space(/time) with the same symmetries.
pantsforbirds|2 months ago
sampo|2 months ago
You can do that, and it might be cleaner and fewer lines of code that way.
But you don't necessarily need to pass the array dimensions as a parameter, as you can call `size` or `shape` to query it inside your function.
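A sketch of that alternative (illustrative names): an assumed-shape dummy argument, with the extent queried via `size` inside the routine.

```fortran
! Assumed-shape dummy: no dimension argument needed. Requires an
! explicit interface (e.g. put the routine in a module).
subroutine normalize(grid)
    implicit none
    real, intent(inout) :: grid(:, :)
    integer :: n
    n = size(grid, 1)        ! query the first extent at run time
    grid = grid / real(n)
end subroutine normalize
```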
thatjoeoverthr|2 months ago
3uruiueijjj|2 months ago
It also helps that Fortran compatibility is a must for pretty much anything that expects to use BLAS.
thatjoeoverthr|2 months ago
drnick1|2 months ago
thatjoeoverthr|2 months ago
Intel and Nvidia both offer C and Fortran compilers, so I was looking at both. I know C well, but I decided not to treat it as the presumed default.
When I used C like this in the past, the intrinsics were very low-level, e.g. wrapping specific AltiVec or SSE instructions or whatever. I see it has OpenMP pragmas now, though, which I'm sure I'll try later.
If I use a library, I'm breaking the operations up, and the optimizing compiler gets no opportunity to consider them together.
With Fortran, I can give the operations directly to the compiler, tell it the exact chip I’m working with and it deals with it.
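For example (a generic sketch, not the author's code), whole-array expressions hand the loop structure to the compiler, which can fuse and vectorize them for a target chosen at compile time, e.g. via ifx's `-xHost` flag:

```fortran
! Whole-array expression: the compiler sees the complete operation and
! can fuse, vectorize, and schedule it for the target selected at
! compile time (e.g. ifx -O3 -xHost).
pure function damped_sum(a, b, alpha) result(c)
    real, intent(in) :: a(:, :), b(:, :), alpha
    real :: c(size(a, 1), size(a, 2))
    c = alpha * a + (1.0 - alpha) * b   ! one fused elementwise kernel
end function damped_sum
```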
It would be fun, when I have some time, to go rewrite it in C and see how it compares.