tsurba | 9 months ago
My JAX models and the baseline PyTorch models were quite easy to set up there, and in practice there was no noticeable performance difference compared to the 8x A100s I used for prototyping on our university cluster.
Of course it’s just a random anecdote, but I don’t think Nvidia is actually that much ahead.
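For what it’s worth, a minimal sketch of the kind of per-step throughput check behind a comparison like this, assuming a toy jitted MLP on whatever accelerator JAX picks up (the model shape, batch size, and step count here are illustrative, not the commenter’s actual benchmark):

    # Rough timing sketch: time a jitted forward pass on the default JAX device.
    import time
    import jax
    import jax.numpy as jnp

    @jax.jit
    def step(params, x):
        # Toy two-layer MLP forward pass (placeholder for the real models).
        w1, w2 = params
        h = jax.nn.relu(x @ w1)
        return h @ w2

    key = jax.random.PRNGKey(0)
    k1, k2, k3 = jax.random.split(key, 3)
    params = (jax.random.normal(k1, (1024, 4096)),
              jax.random.normal(k2, (4096, 1024)))
    x = jax.random.normal(k3, (8192, 1024))

    step(params, x).block_until_ready()  # warm-up, triggers compilation
    t0 = time.perf_counter()
    for _ in range(100):
        out = step(params, x)
    out.block_until_ready()  # wait for async dispatch before stopping the clock
    ms = (time.perf_counter() - t0) / 100 * 1e3
    print(f"{ms:.2f} ms per step on {jax.devices()[0]}")

Running the same script on the two machines gives a crude per-step number to compare; the block_until_ready calls matter because JAX dispatches work asynchronously.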
pama | 9 months ago
nickpsecurity | 9 months ago
They'd probably have to spend $5k-20k on a multicore or NUMA-style box to get huge gains on multithreaded code. They also lose the cool factor of saying they're using an RTX. Maybe there's grant money if it's tied to GPU use. Between the three, it might make sense, even financial sense, to get a sub-$2000 GPU to accelerate academic code that barely uses the GPU.
I'm just brainstorming here, though.