item 20084704


oneshot908 | 6 years ago

On the contrary, I think this is one of the biggest emerging blockers to progress in ML/AI research, especially in academia. It has always been more cost-effective to run ML algorithms on consumer HW such as GeForce GPUs and gaming CPUs. It's frequently even faster than contemporary cloud offerings when the consumer HW gets ahead of existing enterprise HW. And it's so effective that HW companies started changing their EULAs and crippling previously available API features to herd AI back into the datacenter, where they seem to think it belongs.

And that, IMO, is a reinvention of the "Walled Garden" of academic HPC (ask any grad student begging and pleading for supercomputer time), which has always sucked. Its new commercial incarnation is even worse, because it's unclear how to get commercial cloud time on government grants.

OTOH it's fine for large shops like OpenAI, DeepMind, AWS AI, FAIR, MS Research, etc., because they have deep, deep pockets. So if you're content with most future groundbreaking research coming from a small tribe of market leaders, great, but I suspect innovation is already slowing down because of this.
