vGPUs?

By: --- (---.delete@this.redheron.com), April 15, 2022 7:32 pm
Room: Moderated Discussions
Do GPU vendors (eg ATI, nV, INTC, ARM) implement vGPUs?
By vGPU I mean a virtual GPU not in the hypervisor sense, but in the sense in which SMT provides a vCPU.

You might wonder what the point is, since a GPU is already SMT gone mad. Yes, but it's SMT for a single task using a single address space.
Apple have a patent on this, filed in 2012 but only granted in 2017: https://patents.google.com/patent/US9727385B2, "Graphical processing unit (GPU) implementing a plurality of virtual GPUs". Given the dates, who knows whether it builds on PowerVR ideas or was only implemented once Apple implemented their own GPU.
The patent suggests two reasons this is valuable:
- although a standard GPU has many threads, those threads tend to be correlated, so that if one is waiting on RAM, many may be waiting on RAM, since they all just started on the same next texture or whatever. vGPUs introduce *more independent* threads into the system that can run while some threads are blocked. This is basically the SMT argument (there's a toy sketch of it after this list).

- context switching a GPU's application context is very heavyweight, not something you want to do often, and only something you can do at particular "safe" points, like after the next frame has been calculated. vGPUs allow a lighter-weight form of simultaneous execution without the costs of context switching, so that eg the main GPU task can be performed each frame, and whatever time is left until the next frame starts can be sopped up by some background vGPU task. (For this to work one needs priorities attached to the vGPUs, but that's part of the scheme, as are separate address spaces.)

At least to some extent the kind of thing they have in mind (on an iPhone, anyway) is that the "foreground" vGPU is painting the screen while a "background" vGPU is doing something AI- or compute-related; a sketch of what that split looks like from the app side is also below. Of course even on an iPad, let alone a Mac, there's also the issue that, even if the screen is ultimately "owned" by some compositing app, it can have sections (ie "windows") owned by different Metal clients, so one needs some sort of muxing of command streams. If other GPUs do not offer vGPUs, do they perform this muxing at the driver level, giving each command stream a millisecond or whatever?
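To make the first point concrete, here's a toy model of why uncorrelated threads help a throughput engine. The numbers are made up, nothing here is measured or taken from the patent; it just shows one task whose warps stall in bursts versus the same task plus an independent vGPU whose stalls don't line up with it.

```swift
// Toy utilization model (illustrative numbers only, not measurements).
// Each "cycle", a warp group either issues work or stalls on memory.
// Task A's warps are correlated: they stall together in bursts
// (e.g. everyone just requested the same new texture).
// An independent vGPU (task B) stalls on its own schedule.

let cycles = 1_000
var busySingle = 0     // cycles with useful work, task A alone
var busyWithVGPU = 0   // cycles with useful work, task A + independent vGPU

for c in 0..<cycles {
    let taskAStalled = (c % 10) < 4   // stalled 40% of the time, in bursts
    let taskBStalled = (c % 7) < 3    // stalled ~43% of the time, different phase

    if !taskAStalled { busySingle += 1 }
    if !taskAStalled || !taskBStalled { busyWithVGPU += 1 }
}

print("Utilization, one correlated task: \(100 * busySingle / cycles)%")
print("Utilization, task + independent vGPU: \(100 * busyWithVGPU / cycles)%")
```

With these made-up stall patterns the single task keeps the machine about 60% busy, while adding the independent vGPU pushes it to roughly 83%, which is the whole SMT-style argument in miniature.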
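And for concreteness on the second point, here's roughly what the foreground/background split looks like from the application side in Metal. This is only a sketch: as far as I know Metal doesn't let you name a vGPU or attach a priority here, so whether these two queues land on separate hardware vGPUs or just get time-sliced by the driver is exactly what I'm asking about.

```swift
import Metal

// Application-side view of "per-frame work plus a background soaker" (sketch).
// How the driver/hardware interleaves the two queues is invisible at this level.
guard let device = MTLCreateSystemDefaultDevice(),
      let frameQueue = device.makeCommandQueue(),       // per-frame rendering work
      let backgroundQueue = device.makeCommandQueue()   // long-running compute/ML work
else { fatalError("no Metal device") }

// Foreground: submitted every frame, needs to finish before the next vsync.
if let frameCB = frameQueue.makeCommandBuffer() {
    // ... encode this frame's render passes here ...
    frameCB.commit()
}

// Background: submitted occasionally, happy to soak up whatever GPU time is left.
if let bgCB = backgroundQueue.makeCommandBuffer(),
   let encoder = bgCB.makeComputeCommandEncoder() {
    // ... encode a long-running compute kernel here ...
    encoder.endEncoding()
    bgCB.commit()
}
```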


I've never heard of this vGPU idea before, so I wonder whether it's a "genuine" Apple innovation or simply their implementation of something that's already common.
I think it's much less problematic than the CPU equivalent because GPUs are, of course, throughput engines, so anything that boosts throughput is desirable (as opposed to CPUs, where trying to make a latency engine double as a throughput engine is likely to hurt the latency side). I don't know enough about security (let alone GPUs) to know whether there could be security concerns with this sort of sharing.