My article about NVIDIA’s new VGX virtualised GPU being a potential holy grail for task- and power-user desktop virtualisation inspired reader comments that are well worth addressing. They also brought out a few details that I didn’t cover in the article. First, let’s address a few of the specific comments. From reader …
This demo was obviously running on a LAN...
First: nope, the demo wasn’t running on a LAN. Grady Cofer from Industrial Light & Magic actually went out to their server farm and made adjustments to the Avengers and Battleship footage on the fly.
Fair enough. What kind of WAN connection was he using? 100Mb/s? 10Mb/s? 5? More importantly, what was the latency?
If you're streaming an FPS* game, the maximum tolerable latency (just for the video) would be 33ms (that would give you 30fps** - a rock-bottom standard in FPS* gaming). I've seen worse (by an order of magnitude) on 10Mb/s connections. You can't buffer the video, because it's (ostensibly) interactive - you don't know what the next frame will be until you've processed the user input.
For commercial render farms, yes, this is definitely a game changer. But for online gaming? The GPU->NIC path isn't the critical path. The WAN is.
* first-person shooter (not frames per second).
** frames per second (not first-person shooter).
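The frame-budget arithmetic behind that comment can be sketched in a few lines. This is a hypothetical back-of-the-envelope calculation; the network and codec figures are assumptions for illustration, not measurements from the demo.

```python
# Back-of-the-envelope frame budget for cloud game streaming.
# All network/codec numbers below are illustrative assumptions.

def frame_budget_ms(fps: float) -> float:
    """Time available per frame at a given frame rate, in milliseconds."""
    return 1000.0 / fps

budget = frame_budget_ms(30)       # ~33.3 ms per frame at 30 fps
round_trip_ms = 10 + 10            # hypothetical WAN: 10 ms each way
encode_decode_ms = 8               # hypothetical codec overhead
render_ms = budget - round_trip_ms - encode_decode_ms

print(f"Per-frame budget at 30 fps: {budget:.1f} ms")
print(f"Left for server-side rendering: {render_ms:.1f} ms")
```

Even with an optimistic 20ms round trip, most of the 33ms budget is gone before the GPU renders a single pixel, which is the commenter's point: the WAN, not the GPU-to-NIC path, is the critical path.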
Re: Critical Path
Good point about the LAN/WAN considerations. There wasn't an opportunity during the event to grill them about the specifics of the connection, although I agree that they're the key. From what I understood, the game was hosted on Gaikai servers located somewhere in the valley.
I think you're right that this is definitely a game changer for render farms, and probably for enterprise IT as well. It might also be a substitute for some portion of the game console market. Time will tell...
Re: Critical Path
Not only that, but it's also important to see the *quality* of what he was doing - I'm having a hard time imagining it usable for 10-bit, compositing, etc. over a residential line.
Re: Critical Path
Latency is already an issue with multiplayer First Person Shooters... I don't know enough to guess if a cloud-run multiplayer game would be better or worse overall, though.
I wanted to comment on this paragraph in particular:
JustNiz brings up good points in his comments about how VGX will be used for online gaming. I’ll be addressing at least some of them in an upcoming in-depth blog. Briefly, I think there are reasons to be optimistic. Not every change is for the worse, and I think it’s likely that users will see a better gaming experience in some ways. Servers will run games faster and much more efficiently. Developers will only have to write for one platform, meaning they can put more $$ into either making more/better games or reducing prices.
The downside to this approach is that it makes player mods much more difficult to build and deploy safely. While console players may regard this as no great loss, those of us who have been playing PC games for several years will recognize that this will inevitably lead to a loss in creativity. Off the top of my head, I can think of Counter-Strike, Team Fortress, and Red Orchestra as free mods that became commercial releases in their own right. id software's long commitment to open editing for the Doom and Quake series, Neverwinter Nights' editing tools, the Operation Flashpoint/ArmA series mission editors, Civilization's player tools, Unreal Tournament editors, and Company of Heroes editing tools all extended the viability and replayability of the underlying games.
Heck, entire genres were invented by players. Capture the Flag is just one classic example. It's a game style that shows up in game engines of all sorts these days, yet it was originally just another player mod: Threewave's CTF for Quake.
All of this creativity is inherently more difficult to encourage as more technical control is taken away from players. I expect that if the economics of the proposed technology prove themselves, some game companies will take advantage of this format. I expect some will move almost exclusively to it. Frankly, I also don't expect that I'll be too interested in most of those games. :(
(Mine's the one with an old Quake 2 CD in the pocket next to the ArmA2 DVD.)
That's a point that I didn't consider and it's a good one. There isn't much room in this new model for players to get their hands on code and make modifications - at least initially. Consider this, though: down the road, why wouldn't (or couldn't) the service provider and game designer give players a mechanism to mod code and then play it? They might want to get more $$ from it, of course, or it might simply be a competitive advantage for them to offer it. It's possible - but it probably won't happen right out of the gate... unfortunately.
Re: Player mods?
I honestly don't expect to see much in the way of player modification allowed under this model. Companies that like the centralized server approach are far more likely to be looking for some sort of micropayments model instead: get players in cheaply, then offer tons of DLC content at $2-10 per item. Letting players build their own mods would make this a much less appealing business model.
This is why I no longer play TF2, for example. When I'm in the mood for some TF, I'll look for a Team Fortress Classic server instead. I would far rather play a game that lets the players run their own servers, build up their own communities, and play the game by the rules that they like. It's a business model that made the initial fortunes of Epic, id software, and Valve.
I realize that my preferences are not necessarily mainstream. That's OK. I've always been a big believer in a diverse marketplace. I think there's plenty of room for the old-school gamers who want all that control, the newer players who prefer pre-baked solutions, the single-player-only guys who never bother to get online, and every combination of the above and anything else you can imagine. As long as there's a reasonably free market, we'll all be able to play what we want.
Coming to the point...
So, coming to the point, what exactly is the benefit - the use case - that justifies doing all of this? Essentially, one would imagine that VGX is all about letting you reap the benefits of centralised/VDI computing while delivering the power of a GPU per user from a dense back-end. Well, there have been ways to do that for a while now. ClearCube's blades, for example, allow 8 power users with dedicated CPUs/GPUs and quad-monitor displays to be hosted from a 3U chassis in the DC. VDI for power users is what VGX is all about, but how much density can you drive on a single server for power users anyway? *Maybe* 12 power users? Is that enough of a premium to warrant shared/VDI scenarios for truly high-end users?
Now, you could use VGX for moderate-load scenarios and mid-level, non-power users, but again, Aero/DirectX have been supported by hypervisors for a while, and Aero remoting has been supported in various software-based protocols for a while. So it's not some new form of DirectX/OpenGL accessibility that's the benefit here, merely performance. But is it that simple? For VDI, the issue is not merely providing access to the GPU, but combining that GPU access with adequate general-purpose compute power plus adequate bandwidth to deliver the pixels to the desktop. Those other two elements are gating factors, not to mention the difficulty of finding/configuring/running modestly priced but high-performance I/O. GPUs aren't the single most important issue for VDI, and while NVIDIA's announcement is welcome, it doesn't drastically change anything.
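The "bandwidth to deliver the pixels" point can be made concrete with a rough estimate. The resolution, frame rate, and compression ratio below are assumptions chosen for illustration, not figures from any vendor:

```python
# Rough pixel-delivery bandwidth estimate for a remoted desktop.
# Resolution, frame rate, and compression ratio are assumed values.

def raw_mbps(width: int, height: int, fps: int, bits_per_pixel: int = 24) -> float:
    """Uncompressed video bandwidth in megabits per second."""
    return width * height * fps * bits_per_pixel / 1e6

raw = raw_mbps(1920, 1080, 30)     # ~1493 Mb/s uncompressed 1080p30
compression = 100                  # hypothetical H.264-class ratio
per_user = raw / compression       # ~15 Mb/s per remoted desktop

users_per_link = int(100 // per_user)  # users on a 100 Mb/s uplink

print(f"Uncompressed 1080p30: {raw:.0f} Mb/s")
print(f"At ~{compression}:1 compression: {per_user:.1f} Mb/s per user")
print(f"Users per 100 Mb/s uplink: {users_per_link}")
```

Even with aggressive compression, a handful of users saturates a 100 Mb/s link, which is why network bandwidth (and the I/O to feed it) gates VDI density at least as much as GPU capacity does.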