
NVIDIA VGX VDI: New tech? Or rehashed hash?

You comment, we respond

HPC blog My article about NVIDIA’s new VGX virtualised GPU being a potential holy grail for task- and power-user desktop virtualisation inspired reader comments that are well worth addressing. They also brought out a few details that I didn’t cover in the article. First, let’s address a few of the specific comments.

From reader Twelvebore:

21st century X-terminals then. Didn't SGI (Silicon Graphics back then) push this sort of stuff a couple of decades ago?

Absolutely right, Twelve (if I can call you Twelve). One of my former bosses, who also held down a top position at SGI at one time, called that to my attention. According to him, it wasn’t a trivial effort at SGI at the time, but they got it done and delivered it to a few clients who were demanding it. I don’t think it was all that long ago, though – maybe 10 years?

Reader JustNiz talked about gaming:

This demo was obviously running on a LAN, which will not happen in real world application ... Manufacturers will love the relatively cheap cost of parts compared to making a fully featured console but almost certainly won't pass the savings on to the end user, as we are already conditioned to pay $399 for a console ... Software houses will love the fact that end users never get an actual copy of the software (so no pirating). I wonder what they will blame low sales on next. Distributors will love the fact that they can charge users again and again to play the same game ...

These three groups will drive this to replace all current gaming regardless of the fact that it's totally worse for the end-user. The populace will just buy this en masse because they've been told to by the advertising...

First: nope, the demo wasn’t running on a LAN. Grady Cofer from Industrial Light & Magic actually went out to their server farm and made adjustments to the Avengers and Battleship footage on the fly.

JustNiz brings up good points in his comments about how VGX will be used for online gaming. I’ll be addressing at least some of them in an upcoming in-depth blog. Briefly, I think there are reasons to be optimistic. Not every change is for the worse, and I think it’s likely that users will see a better gaming experience in some ways. Servers will run games faster and much more efficiently. Developers will only have to write for one platform, meaning they can put more $$ into either making more/better games or reducing prices.

Before you laugh too hard at those last two words, consider this: as this rolls out, we’re going to see new game portals popping up. They’ll be competing on quality of service (performance, RAS, etc), price and game selection. I think the pricing model will be some sort of price-per-hour-of-play scheme. So users will be able to play a wide range of games without putting out much money. Game developers will be paid based on how many users play the game – meaning they’ll want to provide a great experience at a “totally worth it, dude” price.
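To make that arithmetic concrete, here’s a minimal sketch of how a pay-per-hour settlement might work. The hourly rate, revenue split, and function name are all invented for illustration – nothing here comes from NVIDIA or any portal operator.

```python
# Hypothetical pay-per-hour settlement for a cloud gaming portal.
# The rate and the revenue split are made-up placeholders.

HOURLY_RATE = 0.50        # what the player pays per hour of play (USD)
DEVELOPER_SHARE = 0.70    # fraction of revenue passed to the developer

def developer_payout(hours_played: float) -> float:
    """Return the developer's cut for a given number of player-hours."""
    revenue = hours_played * HOURLY_RATE
    return revenue * DEVELOPER_SHARE

# A game that racks up 200,000 player-hours in a month...
print(f"${developer_payout(200_000):,.2f}")  # -> $70,000.00
```

The point of the model, whatever the real numbers turn out to be, is that developer revenue scales with hours actually played – which is what aligns their incentives with providing a great experience.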

Davidoff didn’t see how this was different from what we’ve seen before:

This is hardly new. Virtualising a GPU is already possible under Windows Server 2008 R2 and Hyper-V Server 2008 R2 with RemoteFX. I think it was HP who put up a demo where someone was playing Crysis on a low-end thin client. I also remember that when El Reg posted an article about RemoteFX [and] the majority of commenters didn't get it. But now Nvidia does it and it's now suddenly the best thing since sliced bread.

This is different from what we’ve seen before. From what I can tell, the closest thing to it is what SGI did in their uber-high-end visualisation HPC boxes several years ago. Reader Phil Dalbeck contributed a technical response...

This is very different from the virtualised hardware GPU offered under RemoteFX or the software 3D GPU offered in VMware View 5.

Essentially, VGX is a low-level instruction path and API that allows a vertical slice of the physical graphics card's resources to be routed through to a VM – by a method similar to VMware's DirectIO, for those who want a read. Basically, the VM has direct, non-abstracted access to the physical GPU, together with all that GPU's native abilities and driver calls – ie, DirectX 11, OpenGL, OpenCL, CUDA... the lot.

The virtualised GPU in RemoteFX is an abstraction layer that presents a virtual GPU to the VM with a very limited set of capabilities (DirectX 9-level calls, no hardware OpenGL, no general-purpose compute). Not only does this fail to fully leverage the capabilities of the GPU, it is also less efficient due to having to translate all virtual-to-physical GPU calls at the hypervisor level.

Contrary to some comments above – VGX is a real game-changer for MANY industries. My only hope is that Nvidia doesn't strangle the market by A) vastly overcharging for a card that is essentially a £200 consumer GPU, or B) restrict[ing] competition by tying virtualisation vendors into a proprietary API to interface with the GPU, thus locking AMD out of the market, to the longer-term detriment of end users (eg, CUDA vs OpenCL).

I couldn’t have put it better myself. And by that, I mean I really couldn’t – I have the technical knowledge and attention span of a ground squirrel. But from what I remember in past VDI products, Phil is spot on. They typically relied on software to provide a virtual GPU that might deliver a decent application experience, but fell short when the loads got tougher.
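For readers who, like me, think better in code than in architecture diagrams, here’s a toy model of the distinction Phil is drawing. This is not NVIDIA's or Microsoft's actual implementation – every class and method name below is invented for illustration – it just shows why a translated, feature-limited virtual GPU behaves differently from a direct slice of the physical one.

```python
# Toy model of the two VDI GPU designs described above.
# All class and method names are invented for illustration only.

class PhysicalGPU:
    """Stands in for the real card with its full native capabilities."""
    def execute(self, call: str) -> str:
        return f"GPU ran {call}"

class AbstractedGPU:
    """RemoteFX-style: a limited virtual GPU; every call is translated
    at the hypervisor level before reaching the hardware."""
    SUPPORTED = {"DirectX9"}

    def __init__(self, gpu: PhysicalGPU):
        self._gpu = gpu

    def execute(self, call: str) -> str:
        if call not in self.SUPPORTED:
            raise NotImplementedError(f"{call} not exposed to the VM")
        translated = f"translated({call})"   # extra hop, extra cost
        return self._gpu.execute(translated)

class PassthroughSlice:
    """VGX-style: the VM gets a direct, non-abstracted slice of the
    physical GPU, with all native driver calls available."""
    def __init__(self, gpu: PhysicalGPU):
        self._gpu = gpu

    def execute(self, call: str) -> str:
        return self._gpu.execute(call)       # no translation layer

gpu = PhysicalGPU()
print(PassthroughSlice(gpu).execute("CUDA"))   # works: native access
print(AbstractedGPU(gpu).execute("DirectX9"))  # works, but translated
# AbstractedGPU(gpu).execute("OpenCL")         # would raise: not exposed
```

The two failure modes Phil lists fall straight out of the model: the abstracted design simply can't forward calls it doesn't understand, and even the calls it can forward pay a translation toll on every trip.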

I also don’t think those solutions scaled very well, meaning that you needed a lot of relatively expensive hardware to support a modest number of users. That’s something that I didn’t get into in my previous article. Neither did I include a picture showing, in a broad way, how this thing works. So here’s a bit more info ...

[Graphic: how a VDI stream is handled with and without VGX]
[Graphic: VGX board specifications]

The top graphic is a simplified picture of the differences between handling a VDI stream with and without VGX. Without VGX, the stream takes a longer route back out to the network card, running from the GPU back through the CPU and main memory before being sent out to your screen. With VGX, the stream goes directly from the GPU to the NIC and on to your eyes. Fewer hops means less latency – and the less latency, the better.
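As a back-of-the-envelope illustration of why the shorter path matters – all per-hop times below are made-up placeholders, not measured or vendor-supplied figures:

```python
# Made-up per-hop latencies (milliseconds) purely to illustrate the
# hop-count argument; not measured or vendor-supplied numbers.

without_vgx = {"GPU": 2.0, "CPU": 1.5, "main memory": 1.0, "NIC": 0.5}
with_vgx    = {"GPU": 2.0, "NIC": 0.5}

def path_latency(hops: dict) -> float:
    return sum(hops.values())

print(f"without VGX: {path_latency(without_vgx):.1f} ms, {len(without_vgx)} hops")
print(f"with VGX:    {path_latency(with_vgx):.1f} ms, {len(with_vgx)} hops")
```

Whatever the real numbers turn out to be, every hop you remove is latency you never have to pay back.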

We also have to factor in advances in GPU speed and scalability with Kepler. It has many more cores and performs roughly 3x faster than Fermi. Plus, the VGX board isn’t just a standard GPU card; it’s different in that it carries four GPUs (not two like the K10, or one like the K20) and a massive 16GB frame buffer.

We don’t have all of the technical details yet, and certainly don’t have any real-world data about performance, pricing models, or any of that other good stuff. That’ll be coming in time, and we’ll certainly keep asking questions here and prodding there. I’ll also be writing some more about what I learned, and didn’t learn, at GTC12 over the next few days.


I’ll have a lot of time to think about it, too. I’m starting out soon on my 668-mile (1,075km) drive home – plenty of phone and ponder time. On the other hand, it might be like the trip down, where most of my mental cycles were spent watching my nav display track the distance between my current position and the Heaven on Earth Bakery. It’s this little bakery/restaurant in the middle of nowhere. They’ve hooked me with their mega cinnamon rolls that are literally the size of a baby’s head. Can’t wait to get my next fix... ®
