Sorry, should have elaborated on that a bit. The K80 is Kepler architecture (from 2012) and underperforms Pascal on most Deep Learning benchmarks. Here's a comparison of several NVIDIA cards, and you'll see the K80 near the bottom: http://timdettmers.com/2017/04/09/which-gpu-for-deep-learnin... That article emphasizes consumer cards, which means building your own box, i.e., you lose all the benefits of offloading your work to the cloud. That said, there are cards better suited for DL that can run in a datacenter. It's not mentioned there, but Pascal includes new GPU instructions designed to accelerate DL operations, and cuDNN is already taking advantage of them.

Perhaps more important than the K80 card is the P2 instance type itself, which includes significantly more RAM than you need for DL (hence the high cost of the instance). In other words, the instance was evidently tuned for general GPU compute tasks like graphics-intensive applications and video encoding, not DL.
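To make the instruction-set point concrete, here's a minimal sketch (the table and helper are mine, not from any NVIDIA API): the compute capability reported by a card determines which DL-oriented instructions exist. The K80 is sm_37 (Kepler), while Pascal parts are sm_6x and add native FP16 arithmetic (P100) and DP4A/DP2A int8 dot products (P4/P40), which cuDNN can exploit.

```python
# Hypothetical lookup illustrating which Pascal DL instructions a given
# CUDA compute capability implies. In practice you'd get the capability
# from the driver (e.g. torch.cuda.get_device_capability()).
ARCH = {
    (3, 7): "Kepler (K80)",
    (6, 0): "Pascal (P100)",
    (6, 1): "Pascal (P4/P40, GTX 10-series)",
}

def dl_features(cc):
    """Return (architecture name, list of DL-accelerating instructions)."""
    feats = []
    if cc == (6, 0):
        # P100 runs half-precision math at full rate via half2 ops.
        feats.append("native fast FP16 (half2) arithmetic")
    if cc >= (6, 1):
        # sm_61 adds 4-/2-way int8 dot-product instructions.
        feats.append("DP4A/DP2A int8 dot products")
    return ARCH.get(cc, "sm_%d%d" % cc), feats

print(dl_features((3, 7)))  # Kepler: none of the new DL instructions
print(dl_features((6, 1)))
```

The takeaway: on a K80 the feature list is empty, so cuDNN kernels that use these instructions simply can't run there.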
coffeepants|9 years ago