Yes, CPUs are still the main workhorse for many scientific workloads. Sometimes just because the code hasn’t been ported, sometimes because it’s just not something that a GPU can do well.
londons_explore|2 years ago
Seems stupid to use millions of dollars of supercomputer time just because you can't be bothered to get a few PhD students to spend a few months rewriting in CUDA...
bee_rider|2 years ago
>> just because the code hasn’t been ported, sometimes because it’s just not something that a GPU can do well.
> Seems stupid to use millions of dollars of supercomputer time just because you can't be bothered to get a few PhD students to spend a few months rewriting in CUDA...
Rewriting code in CUDA won’t magically make workloads well suited to GPGPU.
mlyle|2 years ago
A supercomputer might cost $200M and use $6M of electricity per year.
Amortizing the supercomputer over 5 years, a 12-hour job on that supercomputer may cost $63k.
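The $63k figure follows directly from those assumptions ($200M amortized over 5 years, plus $6M/year in electricity, for a fully utilized machine):

```python
# Back-of-envelope check of the cost figure above. All inputs are the
# thread's assumed numbers, not real procurement data.
capex = 200e6            # purchase price, USD
amortization_years = 5
power_per_year = 6e6     # electricity, USD/year

cost_per_year = capex / amortization_years + power_per_year  # $46M/year
hours_per_year = 365 * 24                                    # 8760
cost_per_hour = cost_per_year / hours_per_year               # ~$5,251/hour

job_hours = 12
print(round(cost_per_hour * job_hours))  # ~63014, i.e. roughly $63k
```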
If you want it cheaper, your choices are:
A) run on the supercomputer as-is, and get your answer in 12 hours (+ scheduling time based on priority)
B) run on a cheaper computer for longer-- an already-amortized supercomputer, or non-supercomputing resources (pay calendar time to save cost)
C) try to optimize the code (pay human time and calendar time to save cost) -- how much you benefit depends upon labor cost, performance uplift, and how much calendar time matters.
Not all kinds of problems get much uplift from CUDA, anyways.
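Option C is really a break-even question: the rewrite pays off only when the compute saved across all future runs exceeds the labor cost. A minimal sketch, with purely illustrative numbers (the function, speedup, and labor figures are assumptions, not from the thread):

```python
# Hypothetical break-even check for option C (optimize the code).
def optimization_pays_off(job_cost, runs, speedup, labor_cost):
    """Compare total spend with and without the optimization effort.

    job_cost   -- cost of one unoptimized run (e.g. the ~$63k job above)
    runs       -- how many times the job will be run
    speedup    -- factor by which optimization shrinks compute cost
    labor_cost -- what the rewrite costs in salaries
    """
    unoptimized = job_cost * runs
    optimized = job_cost * runs / speedup + labor_cost
    return optimized < unoptimized

# A one-off job: a $300k rewrite for a 3x speedup doesn't pay.
print(optimization_pays_off(63_000, runs=1, speedup=3.0, labor_cost=300_000))   # False
# Run 20 times: the same rewrite saves money.
print(optimization_pays_off(63_000, runs=20, speedup=3.0, labor_cost=300_000))  # True
```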
Sometimes the code is deeply complex stuff that has accumulated over 30 years. To _just_ rewrite it in CUDA can be a massive undertaking that could easily produce subtly incorrect results, which then end up in papers and propagate far into the future by way of citations etc.
otabdeveloper4|2 years ago
Basically, unless you have a very specific workload that NVidia has specifically tested, I wouldn't bother with it.