The prizes seem pretty low to me: $15k top prize for several months of work requiring a skillset that can easily land a $150k-plus-per-year salary. And no guarantees you get anything for your efforts.
Edit: on the other hand graduate students and postdocs are notoriously underpaid and often have a lot of freedom in how they spend their summer time. I guess that is the target competitor pool?
And it doesn't say which Fortran compiler they are using: bog-standard GFortran, or Intel Fortran with all the parallel extensions.
With this sort of code you also tune it to the machine architecture. Back when I did F77 for BT / Dialcom, the code was tweaked to suit the PR!ME architecture to get extra speed.
Not quite, the submission deadline is already in 8 weeks. And before you see any code, you need to go through their approval process, which they say takes several weeks. So you only have maybe 4 to 6 weeks to get the actual work done...
> As a participant, you’ll need to gain access to FUN3D software through an application process with the US Government.
This is not at all ready for a competition. The least they could do, if they want professional volunteer help, is to open-source the platform that everyone would be deploying on. It was probably built using public money, after all.
Sigh, no shame:
> FUN3D is: Code developed by the US Government at US taxpayer expense, ...
As a Dutch national living in the UK, I did get somewhat excited to at least have a look at the code. It is always disheartening to read these exclusion statements from an institution with a global appeal like NASA. At the very least it's bad press for them in the global science community.
I'm also somewhat amused by the claim they expect a 1000x performance improvement.
There are sadly a huge number of Fortran code bases in science. Not old ones. Not leftover libraries. Not software written for a bygone age.
No: new software. Huge, mission-critical, core-project software.
The best part: No documentation, no maintainer, and no migration plan.
Scientists think in terms of Units of Science per Unit of Work. Translating code, learning new languages, testing, checking correctness, software validation, bug testing, or even just using external (non-science) libraries all yield a very low science/work ratio.
Just opening a text editor, writing a language you already know, and calling it "good enough" is high science/work.
Their method of software validation usually boils down to "spit out your data, graph it, and see if it looks like what you expect".
There's also this notion that "Fortran is fast" that MANY people hold for some reason. They don't know what kinds of modern compiler optimizations they're missing out on or what new IPC frameworks are available. They know Fortran is fast, MPI is parallel, and that's what they use.
Right now I'm looking at a codebase that is actively being developed and is written in Fortran. It's about 48,466 lines in total, and the source is about 2.3 MB in size. No one seems to have formula listings for it. I want to rewrite it, but pulling apart Fortran is very difficult.
It's very scary that if I were to rewrite the software in C, that would be seen as modern! This software is taxpayer-funded and included in real-time systems and production software.
(On that note if anyone is good at Fortran and can document the formulas in this software so I can rewrite them I'd be happy! Please email me)
Well, this is NASA. They are not exactly rolling in cash. Maybe you could look at it as a sort of a voluntarily paid tax, earmarked for space exploration? I, for one, would be happy to donate some of my time for the cause, if they were willing to let me.
> is this the future of work?
Not just the future: many people have contributed to open source projects that have ended up being used in commercial projects, and never received any payment for their work.
Am I reading it correctly? They want people to optimize their code without directly accessing their hardware. This is not how it should be done.
Is it really from NASA? Are they lacking HPC programmers, or anyone who knows HPC? Either their code is really bad, such that optimizing it on your home commodity computer is enough to make it run faster, or they are just making fun of it.
Even if the code gets much faster on their aging clusters, it wouldn't necessarily be any better on Skylake or Knights Landing.
Is anyone here able to compare this FUN3D CFD solver with open source CFD software, such as PyFR, Fluidity or OpenFOAM? In particular, has anyone done any benchmark comparisons?
I'm wondering if maybe NASA would be better off over the long term, if they ported their models to a more accessible platform. It sounds like they are having trouble hiring people who can work on their current stack.
These open-source libraries that you mention (and most in general) are focused on generic constructs of methods for the solution of partial differential equations (PDEs).
PyFR - flux reconstruction.
OpenFOAM - Finite Volume.
The list is actually extensive; there are libraries with a focus on Finite Element, Finite Difference, Boundary Element, Smoothed Particle Hydrodynamics, and so on.
The difference here is that this is a solver, not a library. It is focused on a specific set of PDEs, and by doing so you can apply optimizations that are specific to these PDEs and the space representation it uses. These optimizations are not general to the underlying method of solution (which can actually be a combination of methods), so it will be extremely hard for these generic libraries to compete in performance with a focused solver.
This will add complexity to the challenge somewhat: The cluster uses a mix of E5-2680v4 (Broadwell), E5-2680v3 (Haswell), E5-2680v2 (Ivy Bridge), and E5-2670 (Sandy Bridge) processors.
Each generation has brought different optimisations, tweaked instruction sets, and performance characteristics. You'll have to optimise for the most common scenario.
This is a common problem with modern supercomputers, which are all clusters of some sort. We have a supercomputer at NASA Goddard, much smaller than Pleiades, that gets new "Scalable Units" every few years. The first parts of it are now long gone and what's there is a mix of several recent generations of Intel CPU and some Intel Phis. I'm told that some codes run better on the older CPUs due to caching and other architectural issues.
Nice, they've worked out how to take away their developers' downtime.
> To catch integration errors, the suite of codes are repeatedly checked-out around the clock from our central repository, compiled, and several hundred regression and unit tests are run. Email and cell phone SMS provide instant notification of any errors.
https://fun3d.larc.nasa.gov/chapter-1.html
Accuracy is only 20% of the puzzle, so you could probably grab an easy 80% score by just returning some random numbers right off the bat. Well, now only 70%, because you're not going to get the Originality portion anymore. Even without the tongue-in-cheek, this seems like a sloppily concocted competition, but good luck to those who play around with it.
also their us government tie... this is just a coded message for me that i am not welcome because i am too arabic. at least at first glance. probably its not but this is my experience of trying to do anything involving america in the last several years (i can't speak about the trump administration - but i doubt they are more arab friendly given the bad press).
52-6F-62 | 9 years ago
Big frown over here
jacknews | 9 years ago
Just optimize our old code, and we'll pay peanuts! If you win, that is.
We were going to hire someone to shepherd the old code, but why bother when you geeks will do it for nearly free because it's so cool!!!
OK, I have my cynical hat on, and I can't really blame them for trying a cost-effective approach, but is this the future of work?
soreal | 9 years ago
I'll pass, but hopefully someone else can do this as a labor of love.
photon-torpedo | 9 years ago
Yes, except you are not allowed to run the software on their cluster; instead, you are expected to run it on your own machine.