heisig | 2 years ago
This is not special casing. Most of that code will also be used for the distributed parallelization. I agree with your remarks on hybrid MPI+OpenMP; in fact, Petalisp doesn't use shared memory anywhere, but always generates ghost layers and explicit communication.
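For readers unfamiliar with ghost layers: here is a minimal, hypothetical sketch of the pattern in Python/NumPy (not Petalisp's actual code). Each subdomain is padded with ghost cells, an explicit exchange step copies neighbor boundary values in, and the stencil update then runs independently on each chunk:

```python
import numpy as np

# Illustrative sketch only -- not Petalisp's implementation. It shows the
# ghost-layer pattern: pad each chunk with ghost cells, do an explicit
# exchange instead of sharing memory, then update each chunk independently.

def split_with_ghosts(u, nchunks):
    """Split u into chunks, each padded with one ghost cell per side."""
    return [np.concatenate(([0.0], p, [0.0])) for p in np.array_split(u, nchunks)]

def exchange_ghosts(chunks):
    """Explicit communication: copy neighbor edge values into ghost cells."""
    for i, c in enumerate(chunks):
        if i > 0:
            c[0] = chunks[i - 1][-2]    # left ghost <- left neighbor's edge
        if i < len(chunks) - 1:
            c[-1] = chunks[i + 1][1]    # right ghost <- right neighbor's edge

def stencil_step(c):
    """Three-point average over the interior cells of one padded chunk."""
    return (c[:-2] + c[1:-1] + c[2:]) / 3.0

u = np.arange(8, dtype=float)
chunks = split_with_ghosts(u, 2)
exchange_ghosts(chunks)
distributed = np.concatenate([stencil_step(c) for c in chunks])

# Reference: the same stencil applied to the whole (zero-padded) array.
serial = stencil_step(np.concatenate(([0.0], u, [0.0])))
```

On a distributed machine the two assignments in `exchange_ghosts` would become actual message sends/receives between nodes, but the structure of the computation stays the same.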
> I will also say that it feels a bit weird to call something "peta-" and "HPC" if using more than one socket is relatively far off into the future. For the randomly-wandering PhD students out there, it would be nice to tell them this up front in the Readme :)
I can do that. But let me explain the rationale for naming Petalisp this way: I wanted to create a programming language that is novel, and that has the potential to scale to petaflop systems. And I wanted to create a robust implementation of that language. I think I have achieved the former part, but the latter simply takes time.
Final remark: Good HPC practice is always to get single-core performance right first, then multicore performance, and only then to scale up to multiple nodes. Anything else is an enormous waste of electricity.