top | item 23243290


bitcrazed | 5 years ago

Hi. PM on Windows & WSL here.

Imagine if you could run AI/ML apps and tools that are coded to take advantage of DirectML, whether natively on Windows or on Linux atop WSL.

Now you can run the tools you want and need in whichever environment you like ... on any (capable) GPU you like: You don't have to buy a particular vendor's GPU to run your code.

If you're old like me and remember the dark ol' days when games shipped with specific drivers for (early) GPU cards/chips, but failed to run at all if you didn't have one of the supported cards, you'll understand why this is a big deal.


NOGDP|5 years ago

> If you're old like me and remember the dark ol' days when games shipped with specific drivers for (early) GPU cards/chips, but failed to run at all if you didn't have one of the supported cards, you'll understand why this is a big deal.

Maybe I'm not that old, but I'm old enough to remember the days when Microsoft was intentionally degrading OpenGL performance on Windows ;).

1bc29b36f623ba8|5 years ago

This. Some games would have a handful of different renderers for different setups, while other games would only support one specific card type (and if you were lucky, a software renderer).

Those days sucked. Big time. If we can avoid making the same mistakes for machine learning, then we should.

qayxc|5 years ago

> Maybe I'm not that old, but I'm old enough to remember the days when Microsoft was intentionally degrading OpenGL performance on Windows ;).

Which is still nonsense, since this only affected the OpenGL driver shipped by Microsoft. In contrast to truly bad actors like Apple, OEMs were free to ship their own OpenGL drivers from day one.

So sorry mate, but I have to call BS on that one.

visarga|5 years ago

Probably a different PM, though.

swebs|5 years ago

>Now you can run the tools you want and need in whichever environment you like

Isn't the linked post saying you have to be running Windows, though? It seems like it would make way more sense either to port DirectX to Linux, or to ditch DirectX and put those resources into supporting Vulkan.

kasabali|5 years ago

Whichever environment you like as long as it's on Windows

lostmsu|5 years ago

Hi!

Don't you think the effort to achieve this would be absolutely massive? I don't know what kind of resources are being thrown at this project, but I'd estimate a minimum of 3 dev teams for 2 years just to get a few variations of ResNet to work as-is. And that's just for regular models that don't require quantization or (auto-)mixed precision for training.

freeone3000|5 years ago

Neither PyTorch nor TensorFlow supports WinML, so this is still going to be a bit of a stretch, since CUDA remains the toolkit of choice for mainstream ML frameworks.