Graphics shocker: Nvidia virtualizes Kepler GPUs

You game-console makers who still want to be in the hardware business, look out. You console makers who don't want to be in the hardware business (this might mean you, Microsoft), you can all breathe a sigh of relief: after a five-year effort, Nvidia is adding graphics virtualization to its latest "Kepler" line of GPUs. The …

COMMENTS

This topic is closed for new posts.
  1. Anonymous Coward
    Thumb Up

    Excellent

    It's nice when you read something that is new and has real-world benefits.

    I also approve of any diagram that can be printed on the back of a fag packet (British slang, not some new networking thang). Though looking at the picture, it does look like the GPU can talk to the hard disc without having to go via the CPU, which would be news in itself. But it is one of those diagrams that, without the preceding slides and labels on some of those numbers, is perhaps open to many interpretations.

    Still, GPU processing will have truly landed when some git releases antivirus software for it. I dread that day for many reasons.

    But bottom line, this has plenty of uses and will be lapped up by the cloud people out there; I just hope the software that drives it is flexible and not limited in its application to problems. It's a nice big step in a direction that is actually needed at this end of the scale.

    1. Tinker Tailor Soldier
      Holmes

      Re: Excellent

      I don't see why a GPU couldn't directly get the result of a DMA transfer after the host CPU set it up. File systems would pretty much be a CPU thing, though.

  2. Sorry that handle is already taken. Silver badge

    I could see ISPs setting these up, or perhaps partnering with cloud gaming providers, to deal with distance-related latency by connecting their customers directly.

    1. Asgard
      Happy

      One way to reduce latency is regional and local data centres, and with the very high computing demands of GPU applications (in games, for example) they will need a few data centres per country. That will reduce latency (for *most* users, depending on how they connect). However, users will need as much GPU power as one current-generation GPU card per active user, so I don't see how companies will meet that level of demand.

      For example, Diablo III is crippling servers as we speak, and historically big new games very often hammer servers, so if the servers also had to serve real-time graphics they would need to be vastly more powerful ... It would be awesome to see that much computing power built into data centres, but I think we are at least a few years from that being practical. This approach would also provide a vast resource of computing power not just for games, but for research as well. :)

      That said, personally I still want my own GPU card and I can't see that ever changing, and for some applications it makes sense to have the GPU in the PC etc. I can see some publishers trying to go down this route for some games, but it wouldn't appeal to me for most applications.

  3. Filippo Silver badge

    Latency

    No, it's not true that if you can stream Netflix then you can stream games. Netflix is unaffected by latency and barely affected by lost packets or jitter. Games are horribly affected by both. Those charts which show the Grid outperforming a home console assume 30ms latency, which I've rarely seen in real life. I for one never get better than 150ms, which is more than enough to put a home console way back on top.

    1. TheOtherHobbes

      Re: Latency

      Indeed.

      Anyone with half a brain cell will be thinking 'huh?'

      This is impressively clever technology, but online latency makes it almost completely useless for performance gaming. (Unless your idea of performance gaming is something like Bejeweled, with cooler 3D than usual.)

      A better application would be improved VR UIs of various kinds.

      And it's a natural match for WebGL.

      1. dogged

        Re: Latency

        If you can ensure that everyone gets (almost) the same latency, it ceases to be an issue. Think about the usual MMO problems. Rubberbanding - not a problem. Characters in view - not a problem. Roundtrip calculations - not a problem. Client-side speed/damage/positioning hacks - impossible.

        There's an awful lot of potential here.

        1. Anonymous Coward
          Anonymous Coward

          Re: Latency

          But that's for MMOs. Can you make the latest F1 racer cope with 2-3 second lag on user input?

          Would you want to?!

          I don't see latency or "up time" being fixed any time soon on the UK market. That's before we consider the USA or anywhere else.

          1. dogged

            Re: Latency

            150ms ping time does not equate to 2-3 second user input lag, and you know it.

            Reductio ad absurdum doesn't help your case.

            1. Anonymous Coward
              Anonymous Coward

              Re: Latency

              Trying to cast Harry Potter spells won't help your case either.

              1. phr0g
                Happy

                Re: Latency

                I actually LOL'd at that. Thanks. Although he has a point.

            2. Anonymous Coward
              Anonymous Coward

              Re: Latency

              But can you consistently get a 150ms ping time (70ms each way, including the line and processing)? Because my PC, games console or even iPhone can. My internet connection? Well, yes, it can; many cannot, though.

              I might give one of the services a try one day, but my bandwidth charges will always be more than the hardware savings of using these services.

              1. Anonymous Coward
                Anonymous Coward

                Re: Latency

                Sorry, that should be 75ms each way for a 150ms response time.

                1. Tom 38
                  WTF?

                  Re: Latency

                  Do you understand how latency is measured, 'Technical'Ben? I'll give you a hint: 'RTT' does not stand for "Rodent Top Trumps".
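
                  For anyone following along: RTT is the full round trip, so a quoted 150ms ping already covers both directions. A rough way to sample it yourself is to time a TCP connect, which takes roughly one round trip; the sketch below is purely illustrative, and the host and port are arbitrary examples (ICMP ping is the more usual tool).

                  ```python
                  # Rough RTT estimate: time a TCP handshake, which takes about one round
                  # trip (SYN out, SYN/ACK back, then connect() returns). ICMP ping is
                  # more precise, but this needs no special privileges.
                  import socket
                  import time

                  def tcp_rtt_ms(host, port=443, samples=5):
                      """Median TCP connect time to host:port, in milliseconds."""
                      results = []
                      for _ in range(samples):
                          start = time.monotonic()
                          with socket.create_connection((host, port), timeout=2):
                              pass
                          results.append((time.monotonic() - start) * 1000)
                      return sorted(results)[len(results) // 2]

                  if __name__ == "__main__":
                      # Arbitrary example host -- substitute the server you actually care about.
                      print(f"~{tcp_rtt_ms('www.theregister.co.uk'):.0f} ms round trip")
                  ```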

                  1. Anonymous Coward
                    Anonymous Coward

                    Re: Latency Tom 38

                    OK, sorry, I was counting the time between two of my inputs. If, for example, I type a letter, I wait for it to show on screen before typing the next one. When playing a game, you may wait to see what kind of response you get before making a new action; no one makes a move in chess before they see their opponent's move.

                    Think of it like loading a webpage. You cannot proceed to the next one until it has loaded. A delay of 150ms to load a page means you have taken 300ms to get to the second page, and the fourth page takes 600ms. Compare this to a delay of 20ms: by the fourth page you have only taken 80ms (ignoring the delay of the user). A small amount of "lag" can add up, and it becomes disorientating for some, mainly in speed- and reaction-based games such as driving games. Even normal online play in these is troublesome if input becomes delayed (rubber-banding etc.).

                    Just add a 150ms delay to your mouse, then tell me it has no impact on usability.
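
                    The compounding effect described above is simple arithmetic; here is a minimal sketch using the same figures as the comment (four sequential, wait-for-the-result actions).

                    ```python
                    # Each wait-for-the-result interaction costs one full round trip, so
                    # small delays compound: total_wait = round_trip_ms * sequential_actions.
                    def total_wait_ms(round_trip_ms, sequential_actions):
                        return round_trip_ms * sequential_actions

                    for rtt in (20, 150):
                        print(f"{rtt:>3} ms RTT -> {total_wait_ms(rtt, 4)} ms over four sequential actions")
                    # 20 ms RTT -> 80 ms; 150 ms RTT -> 600 ms: the same four 'pages' as above.
                    ```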

                    1. dogged
                      Stop

                      Re: Latency Tom 38

                      @TechnicalBen

                      If your thoughts were accurate, all online gaming would be impossible. Even LAN gaming would be unfeasibly slow.

                      Stop being ridiculous.

                      1. Anonymous Coward
                        Anonymous Coward

                        Re: Latency Dogged.

                        I am only saying that online gaming becomes impossible when latency is too high, not that all online gaming is. I've come across the rubber-banding, the delayed reactions and the "paradoxes" that happen when it gets too great. I just doubt they can avoid all of those on the current network if they're streaming video as well as input.

                2. Anonymous Coward
                  Anonymous Coward

                  Re: Latency

                  My latency to most UK/EU servers is sub-20ms.

  4. Duncan Macdonald

    Corruption ?

    If the GPU has direct memory access to each VM then there is a gaping security hole - a corruption in one VM (software error or malware) could result in the corruption of other VMs. If the GPU does not have direct memory access then the performance is going to be crippled by the need for software to move the data in and out.

    For Nvidia to have a bunch of Kepler chips with only 192 working cores instead of 1536 suggests that their yield problems are very bad (only 1/8th of the cores working).

    1. BinkyTheMagicPaperclip Silver badge

      Re: Corruption ?

      You've got it the wrong way round - the issue would be with crap firmware in the GPU, not with the VM. The hardware access in this case will almost certainly be managed by Directed I/O (VT-d/IOMMU - hardware access managed by the memory controller) and SR-IOV (splitting a device into multiple virtual functions).

      I don't know quite how much of a free-for-all a modern GPU has on main system memory, though. The memory controller hub should keep the VMs safe from one another, but if the device firmware is broken and the GPU has a mapping to more memory than it should, then I suppose there could be a hole...
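
      For illustration only, here is a toy model of that point: DMA from an SR-IOV virtual function only reaches host memory through the mappings the hypervisor has programmed for that VF, so a VM is only exposed if the device ends up with mappings it shouldn't have. This is a conceptual sketch, not Intel's or Nvidia's actual table formats.

      ```python
      # Toy model of IOMMU-policed DMA. Each SR-IOV virtual function (VF) handed
      # to a VM gets its own allowed I/O-virtual-address ranges; a DMA request
      # outside them faults instead of reaching host memory. Purely illustrative.

      class ToyIOMMU:
          def __init__(self):
              self.mappings = {}  # vf_id -> list of (iova_start, length, host_phys_start)

          def map(self, vf_id, iova, length, host_phys):
              self.mappings.setdefault(vf_id, []).append((iova, length, host_phys))

          def translate(self, vf_id, iova):
              for start, length, host_phys in self.mappings.get(vf_id, []):
                  if start <= iova < start + length:
                      return host_phys + (iova - start)
              raise PermissionError(f"DMA fault: VF {vf_id} has no mapping for {iova:#x}")

      iommu = ToyIOMMU()
      iommu.map(vf_id=0, iova=0x1000, length=0x1000, host_phys=0x8000_0000)  # VM A's buffer
      iommu.map(vf_id=1, iova=0x1000, length=0x1000, host_phys=0x9000_0000)  # VM B's buffer

      print(hex(iommu.translate(0, 0x1800)))  # fine: inside VM A's mapping
      try:
          iommu.translate(0, 0x4000)          # faults: VF 0 has no mapping there
      except PermissionError as err:
          print(err)
      ```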

  5. Great Bu

    Inevitably

    The real question, surely, is how many fps I can get on Crysis if I virtualise my GPU through the new nVidia supercomputer they are building at Oak Ridge?

    1. Ru

      Get with the times, granddad

      The world has moved on. Crysis hasn't been the benchmark for a few years now.

  6. Clive Galway
    WTF?

    I call bullshit

    WTF? That slide basically makes the claim that most games suffer from 150ms input lag.

    Seriously? 150ms between making an input and seeing the result on the screen? I find that very hard to believe.

    Any game coders care to comment?

    Also, the "display" segment of the graph is confusing. Surely it isn't time to render - do they mean display lag? Seems pretty irrelevant though - they have included the same amount of time for all rows.

    Also the "Network" component seems bullshit. Are they comparing GPU or network infrastucture? How "Cloud Gen I" have a 75ms latency but Galkai only 30ms?

    Surely that isn't a fair comparison? If they mean that there is less network latency because other bits are done faster, then that is not network, that is "Game Pipeline", surely. The 30 vs 75ms figure for network implies that Microsoft's network is shit, not that nVidia's GPUs are better. Or does the statement "Nvidia and its online gaming partners think they have the latency issue licked when the GeForce Grid software is running on data centers that are not too far from players" mean that they are basically comparing Gaikai running on a local data center vs OnLive running on a data center on a different continent? Hardly a fair comparison.

    Sorry, but this just reeks of twisting the truth. The fact of the matter is that for multiplayer games I would never, ever use a cloud gaming service - it just isn't possible to compete with players on standalone machines (unless, possibly, the cloud service also hosted the game server, but then you would be limited to BF3 matches on certain servers only).

    1. Tom 38
      Thumb Up

      Re: I call bullshit

      I agree, but I think we're a minority among gamers. I know several people who use OnLive and love it, but I get pissed off in-game if my ping goes above 20*. Thinking about it, they mainly play single-player games, which I guess don't rely on beating another human's reactions/ping.

      Most UK players on our UK server have pings between 15ms and 70ms (FTTC/LLU ADSL at one end, certain VM areas (Glasgow in particular) and TalkTalk at the other), most EU players between 40 and 90 (apart from the Dutch, who seem to have the most awesome internet connectivity).

      * Some commentard above said they never see pings below 150ms, which I find astonishing. You would get banned from our servers with that sort of ping, fucking HPBs.

      1. Lee Dowling Silver badge

        Re: I call bullshit

        Although you can see an easy measurement of ping, you can't see the other "hidden" lag.

        The ping is just the time for a single packet (maybe not even a game packet but an explicit ping packet) to reach the server you're playing on (and return, I assume).

        You almost certainly lose a frame (1/60th of a second) no matter what anyway, because of screen buffering and game programming techniques. That's 16.7ms. Then your mouse is probably optical and USB - that could easily add the same number of milliseconds again between you moving it and it being sensed, processed and reaching the main CPU for the game to act on (but only when it next hits an event loop, which could easily be half the above - so another 8ms or so!).

        Then there's the time to send the image down the HDMI cable and the HUMONGOUS time the LCD might take to process it (even a 5ms full-dark-to-full-bright time means nothing, as a lot of modern LCDs "buffer" screen updates even further inside themselves, so although technically true, you could be 2-3 frames behind what you think you're supposed to see - yet another 35+ms!).

        There's more to responsiveness than ping time. A lot more. But ping time is easily measurable without any human bias at all. The rest aren't. There's no way to reliably measure just how long it takes you to respond to a screen change without taking a reaction test. And there the greatest error contributor is your brain processing and nerve response, which swamps all these other factors anyway.

        I have run game servers (CS 1.6 etc.) for years. I consider 100+ ping unacceptable personally, and 200+ unacceptable for anyone entering the server. But my reaction time in even the simplest test of pressing a mouse button when you see a dot swamps anything that my PC could be waiting for from the network. What they are saying with the "150ms" measurement is that there's an awful lot of stuff other than ping that affects perceived responsiveness in the average gamer's setup. But that still doesn't mean they have solved those problems themselves, or that their system isn't liable to that same 150+ms "technical" latency.
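
        Summing the estimates above gives a rough feel for how much "hidden" lag exists before the network is even involved; the figures below are the comment's own guesses, not measurements.

        ```python
        # Rough sum of the "hidden" lag described above, in milliseconds.
        # These are the comment's estimates; network ping sits on top of all of it.
        hidden_lag_ms = {
            "frame buffering (one frame at 60 Hz)": 16.7,
            "USB mouse: sensing, polling, reaching the CPU": 16.7,
            "waiting for the game's next event loop": 8,
            "LCD internal buffering (2-3 frames)": 35,
        }

        total = sum(hidden_lag_ms.values())
        for stage, ms in hidden_lag_ms.items():
            print(f"{stage:<46} {ms:>5.1f} ms")
        print(f"{'total before any network ping':<46} {total:>5.1f} ms")  # ~76 ms
        ```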

      2. durandal
        Windows

        Re: I call bullshit

        150ms? Luxury, kids today don't know they're born. When I were a lad, we were up at half five, tying the packets to rats, 'cause we couldn't afford pigeons and even if we could the rats would've et 'em.

    2. scrambled
      Meh

      Re: I call bullshit

      Something else I haven't seen mentioned much in these discussions of streaming games is 'client-side prediction'. If you are playing (most) fast-twitch multiplayer games now and you see 150ms latency, you don't actually EXPERIENCE 150ms latency.

      This is because the client simulates the physics etc. of your control input and puts it on the screen straight away, with zero lag. Then, when the server packets finally arrive, it only has to do a correction (and let lag show) if there is a difference between the client prediction and the server simulation, such as cases where you are colliding with another player, being shot, etc. (which can't be accurately predicted on the client without the other players' inputs). So 99% of the time, you are playing lag-free.

      With a streaming approach you lose this trick, as far as I can see, and there's no getting round the lag. You also have issues like variable latency, which client-side prediction solves, so presumably you need another delay layer to buffer the output frames. There are also other issues, like the way most games have a fixed (low) tick rate and interpolate the frames shown with a delay of one tick, but my tiny brain can't figure out whether this would be an issue in the real world.
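
      For the curious, the predict-then-correct trick described above can be sketched in a few lines. This is a simplified toy (one axis of movement, no interpolation), not any particular engine's implementation.

      ```python
      # Toy client-side prediction with server reconciliation, one axis of movement.
      # The client applies its own inputs immediately (zero perceived lag), keeps a
      # log of unacknowledged inputs, and when an authoritative server state arrives
      # it adopts it and replays whatever the server hasn't processed yet.

      class PredictingClient:
          def __init__(self):
              self.position = 0.0
              self.pending = []      # (sequence_number, velocity, dt)
              self.next_seq = 0

          def local_input(self, velocity, dt):
              # Applied immediately, so the player sees no input lag.
              self.position += velocity * dt
              self.pending.append((self.next_seq, velocity, dt))
              self.next_seq += 1

          def server_update(self, acked_seq, server_position):
              # Authoritative correction: take the server's state, then re-apply
              # any inputs the server hasn't seen yet.
              self.position = server_position
              self.pending = [p for p in self.pending if p[0] > acked_seq]
              for _, velocity, dt in self.pending:
                  self.position += velocity * dt

      client = PredictingClient()
      client.local_input(velocity=5.0, dt=0.016)  # shown on screen straight away
      client.local_input(velocity=5.0, dt=0.016)
      # ~150 ms later the server acknowledges input 0; usually it agrees with the
      # prediction, so there is no visible snap.
      client.server_update(acked_seq=0, server_position=0.08)
      print(round(client.position, 3))  # 0.16 -- matches what was already on screen
      ```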

  7. Alan Bourke

    Console makers will not be using this anytime soon.

    Because there are huge swathes of even the affluent-ish West that don't have anything like decent broadband. Why drastically limit your potential userbase?

    1. Hungry Sean

      Re: Console makers will not be using this anytime soon.

      A few reasons I can think of:

      1) no need to sell hardware at a loss

      2) easier to refresh (think how embarrassingly bad consoles are right now compared to PCs -- this is down to a 10-year lifecycle)

      3) you can sell developers a platform where content is actually, honest to god, not pirateable and not resellable, because those shifty users never get their hands on it.

      4) subscription model means constant and more predictable revenue stream (maybe? It's worked very well for Blizzard).

      I don't know that these outweigh your point, but there has been a trend of increased reliance on internet connectivity from game makers over the last decade (I understand many games won't even run if you aren't connected nowadays due to DRM tomfoolery). Maybe this is a logical continuation.

  8. M. B.

    What could this mean...

    ...for customers who turned away from VDI due to lack of 3D capabilities?

    I would love to deploy thin clients to my shop floor (manufacturing industry) and have my production team able to load up 3D-rendered drawings from Solid Edge and manipulate them in real time. That just doesn't happen without good GPUs. This might bring my goal one step closer to reality; worth looking into a bit further.

    1. Anonymous Coward
      Anonymous Coward

      Re: What could this mean...

      It's just streaming video plus a remote keyboard and mouse. Actually, it's probably more easily done on a gigabit network than over a landline. There are some VDI systems that already let you run "Crysis", so you could probably do what you want already.

      I am not sure if most VDI uses other means to send data and packets, though. I thought the big thing about VDI was that it was not a video-streaming service, but much more.

  9. Fibbles

    Suddenly, I'm a lot more interested in cloud computing...

  10. Anonymous Coward
    Anonymous Coward

    Maybe they should fix their chips first?

    If their engineering is so poor that the chip failure rate is about 80% due to engineering issues, Nvidia might want to fix that first so they actually have something to sell.

  11. Adam Foxton
    Joke

    A fix for the Latency issue

    would be to TUNNEL THROUGH THE CORE OF THE EARTH ITSELF! *maniacal laughter*
