panosv
|
8 years ago
That's the point. On the GPU side they use all 5000+ cores to parallelize the algorithm (they use the hardware to its full potential). On the CPU side they use just one core (at least there is no mention of how many cores were used on the CPU). It's like saying a Camry beat a Ferrari in top speed, without mentioning that the Ferrari was stuck in first gear for that specific race.
visarga
|
8 years ago
If only! In fact it's a struggle to utilize a GPU to its full potential, because the communication bottleneck often makes it infeasible: the compute is fast, but the data can't get there fast enough.
The authors of this paper were saying the same thing in the promo video; in fact, they were working on making GPUs more efficient. Why would they do that if GPUs were already being used to their "full potential"?
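A rough back-of-envelope calculation shows why the communication bottleneck matters. The sketch below compares PCIe transfer time against GPU compute time for a square float32 matrix multiply; the 16 GB/s bus bandwidth and 10 TFLOP/s throughput figures are illustrative assumptions, not measurements from the paper.

```python
# Back-of-envelope: PCIe transfer time vs. GPU compute time for an
# n x n float32 matrix multiply (C = A @ B). The bandwidth and
# throughput numbers are illustrative assumptions, not measurements.

def transfer_vs_compute(n, pcie_bytes_per_s=16e9, gpu_flops_per_s=10e12):
    """Return (transfer_seconds, compute_seconds) for an n x n matmul."""
    data_bytes = 3 * n * n * 4   # A and B in, C out; 4 bytes per float32
    flops = 2 * n ** 3           # multiplies + adds for a dense matmul
    return data_bytes / pcie_bytes_per_s, flops / gpu_flops_per_s

t, c = transfer_vs_compute(4096)
print(f"n=4096: transfer {t*1e3:.1f} ms vs compute {c*1e3:.1f} ms")

# At smaller sizes the transfer dominates outright, so the GPU
# mostly sits idle waiting for data:
t2, c2 = transfer_vs_compute(512)
print(f"n=512:  transfer {t2*1e3:.2f} ms vs compute {c2*1e3:.3f} ms")
```

Under these assumed numbers, at n=4096 the transfer and compute times are roughly comparable, and at n=512 moving the data takes several times longer than crunching it, which is the "data can't get there fast enough" problem in miniature.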