Critical question here...
Can this be used without VMware, more specifically with KVM or VirtualBox under Linux?
Yes I'm thinking about running games on a VM.
If this is ESXi-only, how well can you play over the local LAN?
AMD has used the VMworld conference in San Francisco this week to take the wraps off a new, hardware-based GPU virtualization tech for virtualized workstations. Known as the AMD Multiuser GPU, it can, the chipmaker claims, allow up to 15 virtualized desktops to share a single graphics processor without any loss of performance. The …
I've been running games under a KVM VM for quite some time, with a dedicated GPU passed through to the VM with vfio. It works pretty well; the gotcha is that the picture is sent to the GPU's own monitor ports (i.e. you need a local monitor attached to the GPU). Such dedicated GPU passthrough with vfio will not allow partitioning of the GPU between clients - it simply attaches the physical PCIe slot (where the GPU is inserted) to the VM's own "virtual PC", giving it exclusive access to the GPU.
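For anyone curious what that passthrough setup looks like in practice, it boils down to enabling the IOMMU and letting the vfio-pci driver claim the card before the host graphics driver does. A minimal sketch, assuming an Intel host board; the PCI IDs shown (10de:13c2 etc.) are placeholders - look up your own card's IDs with lspci:

```shell
# Enable the IOMMU on the kernel command line (amd_iommu=on for AMD CPUs),
# e.g. in /etc/default/grub:
#   GRUB_CMDLINE_LINUX="... intel_iommu=on"

# Find the GPU's vendor:device IDs (placeholder values shown)
lspci -nn | grep -i vga
# 01:00.0 VGA compatible controller [0300]: ... [10de:13c2]

# Tell vfio-pci to claim the GPU and its HDMI audio function at boot
echo "options vfio-pci ids=10de:13c2,10de:0fbb" | sudo tee /etc/modprobe.d/vfio.conf

# Rebuild the initramfs and reboot; the card should then show
# "Kernel driver in use: vfio-pci" in the output of lspci -k
```

Note that everything in the GPU's IOMMU group has to be handed to the VM together, which is where the "right combo of hardware" pain mentioned below usually comes from.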
What AMD is doing here (and what nVidia has done with GRID before) is to render the picture in GPU memory and then pack and send it over the network for a VM client to display on its own monitor (obviously some of this work is done at the hypervisor), and also to share the GPU between many clients, giving each a slice of its silicon as a "virtual GPU".
I've tried on multiple occasions over the years to get PCI passthrough of video to work, with both KVM and Xen, and never had any success. It's a bit tricky to get the right combo of hardware, not to mention the software itself was very early days when I tried it. Eventually I gave up and just built a separate gaming machine. I never tried vfio, as support for it was on the bleeding-edge kernel at the time.
When I read about Nvidia's GRID I was hopeful this would become less of a fringe use case but their software stack was targeting the datacenter. A hardware based solution is better as it's more likely to be something worked into open source drivers. But as you say these solutions virtualize the hardware itself instead of simply passing the whole card to a VM. Depending on how versatile AMD's solution is it might be worth considering. Give the host 10% of a GPU and 90% to a gaming VM?
Now if we just had a standardized way of doing this. AMD and Nvidia will be at it again trying to claim solution supremacy.
Of course to me this is all becoming a moot point with Win10. I'm hesitant to even run that thing in a VM.
Hah, Win10. The only way I am going to install this thing is in a VM, with GPU passthrough, under KVM with vfio. The technology is mature and performs very well, if you are willing to upgrade to a recent kernel, qemu and libvirt, and have enough CPU cores and memory to run a VM without overcommitting. Pinning vCPUs, Hyper-V enlightenments (caution: these play well with AMD cards only, because nVidia has been crippling drivers for its consumer products), huge memory pages, isolated cores on the kernel command line and, if you have more than one CPU socket, matching passed-through PCIe slots with vCPU placement are a few tricks for good VM performance. Another good application for vfio passthrough is a USB3 controller, BTW.
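Most of those tuning tricks live in the libvirt domain XML. A rough fragment (not a complete domain definition; the core counts and pinning layout here are illustrative, so map them to your own topology):

```xml
<domain type='kvm'>
  <memoryBacking>
    <hugepages/>                     <!-- back guest RAM with huge pages -->
  </memoryBacking>
  <vcpu placement='static'>4</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='4'/>   <!-- pin each vCPU to an isolated host core -->
    <vcpupin vcpu='1' cpuset='5'/>
    <vcpupin vcpu='2' cpuset='6'/>
    <vcpupin vcpu='3' cpuset='7'/>
  </cputune>
  <features>
    <hyperv>
      <relaxed state='on'/>          <!-- Hyper-V enlightenments -->
      <vapic state='on'/>
      <spinlocks state='on' retries='8191'/>
    </hyperv>
  </features>
</domain>
```

Pair that with something like isolcpus=4-7 on the host kernel command line so the host scheduler keeps off the pinned cores.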
You store your data in the "Cloud" (aka racks of disks in some datacentre), and domestic computing is moving further and further away from the bulky box and toward sleek little tablets and laptops, prioritising form factor and low energy usage over processing power. We're now at the point where you can connect your phone to a monitor and keyboard and use it as a computer. It's just that, as William Gibson said, the future is not evenly distributed yet. MS claim they have a way for you to encrypt remote processes securely (we shall see). Oh, and really fast Internet connections are becoming more common.
So if you have the bandwidth and low-latency, you can get the basics (hooking up the peripherals and providing an OS) with a small, light device and your data is non-local anyway... What's left that needs to be done locally? Well, graphics I guess... What's that you say, AMD?
Cue angry objections from those who love their big fat desktop. Loud, but a diminishing minority.
"Cue angry objections from those who love their big fat desktop. Loud, but a diminishing minority."
Funny, my Core i7 built in 2010 is still going strong. However, now I want to take advantage of USB 3 and PCIe 3, and the Skylake chips finally look to be a decent enough leap ahead for me to upgrade.
See, now that software bloat isn't killing CPUs since the Core series came out, people haven't been upgrading as often.
Also, I don't know about you, but I find it much easier working off my two 24" LCDs than a piddly notebook screen and keyboard, especially with some of the keyboard layouts you get on your supposedly superior notebook-type keyboards. Especially those ones with a joypad-type layout for the arrow keys; whoever thought those were a good idea should be ejected into the sun, which would be the brightest idea they will ever have.
>>"Funny, my Core i7 built in 2010 is still going strong. However, now I want to take advantage of USB 3 and PCIe 3, and the Skylake chips finally look to be a decent enough leap ahead for me to upgrade."
It's a modest upgrade only in terms of power. IPC increases have been on the order of around 4-6% with each generation change which is a far cry from the old days. It's really a pittance. Where improvements have been pretty big is in terms of power-efficiency. That has been Intel's focus (insofar as they actually care now that they've all but buried AMD at the medium to high-end). Which is what I've been saying - their focus has switched to mobile devices. Offload the heavy computing and focus on something most people prefer which is convenience. The fact that your 2010 i7 is still adequate for most people's use illustrates my point. If home desktops were a healthy market, you wouldn't see performance improvements sitting in the doldrums for the last half-decade and the manufacturers obsess over reductions in TDP.
>>"See, now that software bloat isn't killing CPUs since the Core series came out, people haven't been upgrading as often."
I'm not sure exactly how that addresses my point but a big part of the reason they haven't been upgrading so much is because there's little to upgrade to. If you have a 4870K then what do you actually get out of going to a 5830? Not much. To Skylake? Not much. It's stagnated in every area except IGPs (which brings us back to the focus on non-desktop) and power consumption (again, a non-desktop priority). Intel are many terrible things, but stupid they ain't. They chase the money.
>>"Also, I don't know about you, but I find it much easier working off my two 24" LCDs than a piddly notebook screen and keyboard, especially with some of the keyboard layouts you get on your supposedly superior notebook-type keyboards."
Where the Hell did you get 'supposedly superior notebook type keyboards' from? You seem to have missed what I actually wrote which was that you can connect your mobile device up to monitors and keyboards. You can run those two 24" monitors fairly comfortably from a Surface Pro.
Even my tablets* can be hooked up to my 2560 x 1600 display and connect over Bluetooth to any of the dozen-plus I/O devices I've got lying around (pure keyboards, mice, game controllers, and a lot of mutants), and I can be in bed, on a couch, on the floor (haven't tried the walls or ceiling though). The point is that I can finally do this, even lend out one of the serious machines. But it's difficult (especially financially) to do what should be easy to accomplish. Hell, both machines are literal supercomputers (TFLOPS): one hums along at 4.8 GHz (quad-core) and the other has twin 2.66 GHz six-core Xeons. Maxed-out, max-speed memory.
I've given up on ever seeing Microsoft produce something usable for the enthusiasts' market.
Hmmm.... have you forgotten FreeSync?
Now Intel supports it, as opposed to NVidia's Not-so-FreeSync.
And at one time it too was coming soon!
And of course there is HBM!! Delivered ON TIME and UNDER BUDGET. Who would have thunk a top-flight dGPU for $699!!!
And of course HBM2 is also coming soon!!
So... your point is...?
"While as many as 15 typical workers can share the same virtualized chip for Office-type applications,"
Just how much graphics acceleration does an "Office-type application" actually need in the first place? The only animation they've had was Clippy - and I'm not sure supporting that is really a step forwards!
Most graphic workflows do not need a workstation dGPU card.
A good midrange gaming card will do quite nicely.
This is especially true for engineering and architecture firms: they do not need to build out workstations for every draftsman in the firm, just the power users or top modelers and renderers.
The rest can get by quite nicely with an APU and gaming dGPU running in CrossFire mode.