top | item 40358036

jhiesey | 1 year ago

Agreed. Virtualized 3d acceleration in particular still has quite a bit of "secret sauce" left in it.


gorkish | 1 year ago

Today this is mostly implemented by having a guest driver pass calls through to a layer on the host that does the actual rendering. While I agree that there is a lot of magic in making such an arrangement work, it's an awful idea to suggest that relying on a vendor's emulation layer is how things should be done today.
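The pass-through arrangement described above can be sketched roughly as follows. This is a toy illustration, not any real driver's protocol: real paravirtual GPUs (virtio-gpu/VirGL, VMware's SVGA3D) use shared-memory command rings with binary encodings, and the command names here are invented:

```python
import json
from collections import deque

# Hypothetical command ring shared between guest and host.
# JSON is used here only for readability.
ring = deque()

class GuestDriver:
    """Guest-side stub: serializes 'GL-like' calls instead of touching hardware."""
    def clear(self, r, g, b):
        ring.append(json.dumps({"op": "CLEAR", "color": [r, g, b]}))

    def draw_triangles(self, vertex_count):
        ring.append(json.dumps({"op": "DRAW", "vertices": vertex_count}))

class HostRenderer:
    """Host-side layer: decodes guest commands and replays them."""
    def __init__(self):
        self.executed = []

    def pump(self):
        while ring:
            cmd = json.loads(ring.popleft())
            # A real host layer would translate this into OpenGL/Vulkan
            # calls against the physical GPU here.
            self.executed.append(cmd["op"])

guest = GuestDriver()
host = HostRenderer()
guest.clear(0, 0, 0)
guest.draw_triangles(3)
host.pump()
print(host.executed)  # ['CLEAR', 'DRAW']
```

The guest never needs a driver for the physical GPU; it only needs to speak this one serialized interface, which is the property the later comments in this thread argue about.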

Proper GPU virtualization and/or partitioning is the right way to do it, and the vendors need to get their heads out of their ass and stop restricting its use on consumer hardware. Intel already does: you can use GVT-g to get a guest GPU on any platform that wants to implement it.
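For reference, creating a GVT-g vGPU goes through the kernel's mediated-device (mdev) sysfs interface. The type name varies by hardware generation; `i915-GVTg_V5_4` and the example UUID below are taken from the kernel's GVT-g/mdev documentation, and `0000:00:02.0` is the usual iGPU address:

```shell
# Kernel must be built with CONFIG_DRM_I915_GVT and booted with
# i915.enable_gvt=1; kvmgt provides the KVM-side mdev support.
modprobe kvmgt

# List the vGPU types this iGPU exposes (resolution/memory tiers).
ls /sys/bus/pci/devices/0000:00:02.0/mdev_supported_types/

# Create a vGPU instance; the UUID is then handed to QEMU as a
# vfio device (-device vfio-pci,sysfsdev=/sys/bus/mdev/devices/<uuid>).
echo "a297db4a-f4c2-11e6-90f6-d3b88d6c9525" > \
    /sys/bus/pci/devices/0000:00:02.0/mdev_supported_types/i915-GVTg_V5_4/create
```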

AshamedCaptain | 1 year ago

So you say having a decoupled arrangement in software (which happens to be a de facto open standard) is a "terrible awful idea" and that instead you should just rely on whatever your proprietary hardware graphics vendor proposes to you? Why?

And that's assuming they propose anything at all.

Even GVT-g breaks every other Linux release, is at risk of being abandoned by Intel (they already abandoned the Xen version) or of being limited to specific CPU market segments, and already has ridiculous limitations, such as a cap on the number of concurrent framebuffers AND on framebuffer sizes. (Why? VMware Workstation offers an infinitely resizable window, does it with 3D acceleration just fine, and I have never been able to tell if it has a limit on the number of simultaneous VMs...)

Meanwhile, "software-based GPU virtualization" allows me to share GPUs in the host that will never have hardware-based partitioning support (e.g. ANY consumer AMD card), and allows guests to get working 3D by implementing only one interface (e.g. https://github.com/JHRobotics/softgpu for retro Windows) instead of having to implement drivers for every GPU in existence.