For Windows guest - KVM or XEN and which distro for host?

The time is slowly approaching for me to rebuild my home PC. Since I've been using Linux at work for the past few years (ssh only, and happy with it), I want my new machine to be running Linux as a host system, and Windows on top of it. Not full migration because I've invested good money into software which is …

This topic was created by Bronek Kozicki.


Re: Absurdly complicated Rube Goldberg "solution"

If the host is running headless and Windows is given PCI passthrough to the GPU (and some USB controllers), I do not quite see that much potential for problems. Yes, a lot can go wrong, but not as much as you seem to think.

On the other hand, keeping one box on top of another when the children are fidgeting just next to this stack of boxes, or even better trying to sit on it... that sounds like a lot of fun ;) I simply do not have the space for two boxes, and dual boot is also out of the question, period.


Data storage for shared systems

I've used Btrfs a fair amount as a Debian developer, in order to take advantage of some of its features such as snapshotting and RAID. It has some nice features, but to be completely realistic, it's not anywhere near ready for production use, and isn't likely to be for several years at least. I've had unrecoverable data loss and multiple kernel oopses/panics (though for experimental stuff, so not serious for me). You can't trust the RAID code; a SATA cable glitch made it toast both the transiently failing drive *and* the mirror, turning both into unusable messes which panicked the kernel when you tried to mount them. Coupled with the lack of a usable fsck tool, it would be foolish in the extreme to trust this filesystem with important data. Depending upon the usage patterns, performance can also be awful, though it can be very good.

Now, I use it intensively for snapshotted source build environments (schroot), but that's transient stuff I can recreate in minutes should it get blown away. It may become trustable with time, but at present I don't consider it anywhere near that. I think SUSE may regret making it the default; if their users want to be guinea pigs finding all the remaining bugs, good luck to them!

For Linux, there's the plain and boring ext4, or xfs, or zfs with the appropriate kernel patches. And others as well, but if reliability is the goal, one of those is a good choice.

One of the most annoying things I've found with both virtualisation and multiboot systems is data storage. You inevitably end up with data spread over multiple systems, duplicating stuff and also wasting tons of space due to disk partitioning, with neither system able to safely/efficiently access the other's filesystems, especially when using VMs and they may be mounted already. My suggestion for this is to move all the user data off to another system. I got a small HP ProLiant MicroServer, and put FreeBSD 10/ZFS on it. Now all the data is available to any system via NFSv4/CIFS, which works nicely for all of: single native OSes, dual-/multi-boot systems, and VMs. ZFS also eliminates the inefficiencies of partitioning wasting space: all the data is in a big pool split into datasets (which can have quota/size limits). The choice of OS for this was really down to native ZFS support, including in the installer, and a desire for something new to play with. Debian GNU/kFreeBSD can also do this.
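The pool-and-datasets layout described above can be sketched roughly like this, assuming a FreeBSD host with ZFS; the pool, dataset and device names are illustrative, not the poster's actual setup:

```shell
# Mirrored pool from two whole disks (device names are examples)
zpool create tank mirror /dev/ada1 /dev/ada2

# Datasets instead of partitions; free space is shared by the whole
# pool, with optional quotas per dataset
zfs create -o quota=200G tank/home
zfs create tank/media

# Export a dataset over NFS so any native OS or VM can mount it
zfs set sharenfs=on tank/home
```

A Samba share for the Windows clients can then be pointed at the same dataset's mountpoint, so every system sees one copy of the data.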

With the above setup I have EFI/GRUB2 booting of Debian GNU/Linux, Gentoo and Windows 7, and BIOS booting of Debian GNU/kFreeBSD and FreeBSD 10, on an amd64 system, and OpenFirmware booting of Debian GNU/Linux and FreeBSD 10 on a powerpc system, all using the same user filesystems. There are also a fair number of kvm and VirtualBox images, all of which can also take advantage of the shared storage. The local storage can then be small: it's just the basic OS install and temporary scratch space.


Re: Data storage for shared systems

I have a MegaRAID 9265-8i and can buy some extenders for it - that gives me a ridiculous amount of storage (and the single case I have here is large enough for at least 8 HDDs). So this is probably not going to be a problem.

You gave me a good reason to consider ZFS again, thanks for that. Perhaps with enough RAM given to the host it will work fine.


Re: Data storage for shared systems

Indeed, ZFS is very much the way forward.


How about VMware?

With VMware vSphere you can run several virtual machines side by side, and you get a nice graphical GUI with a thin client on your desktop. Memory and CPU power will be allocated to the OS that needs it. The alternative would be to run Linux in a VM on Windows 2012, but I am unsure what that would do to your performance. The members of your family can each have a personal VM.

The best performance is when each VM gets its own disc volumes as needed. For shared data, you can use a separate Linux FS and export it to the other VMs with Samba, NFS, or AFS.


two boxes

I would never ever use one box for both work AND games, especially if somebody else is fiddling with it.

Also, I am not convinced that a virtual machine can run a game smoothly enough, no matter what hardware is used.

I think I would use a dedicated Linux box with the needed hardware, store it safely where no one can get to it, and access it cheap and dirty via VNC...


I would suggest staying away from KVM and going for Xen instead. KVM has crap support when it comes to USB, as far as my experience is concerned. KVM is the most resource efficient, but Xen is a lot more stable and mature. It also works a lot better for USB and PCI devices.

Be very careful about what hardware you get, and make sure you research its support for PCI sharing, as this will be the most difficult bit and is what will make or break your rig, so to speak.

ATI has the lousiest support for virtualisation in the world, so I'd suggest sticking to Nvidia cards for graphics.


Have to say thanks..

I love what you're trying to do and would love to see the result, or at least see a good write up on the finished product - and not just a how-to or basic list of configs like many are. Sounds like an interesting project.

I've learnt a bit more about what's out there VM-wise myself. I've used them a fair bit over the last few years, but never really pushed them - only using them to run test / safely-infectable installs of Windows, and to test/learn software and so on. But now I am inspired to take a deeper look into what's there now :)

Have to thank you for your questions, and for putting up with the people who haven't read your message - like the important bits where you say you don't have room for another machine - and who go on to tell you why you really only need a second box when you've made it clear that's not an option :)

That said, if you're letting others (especially the kids) get on it, you really should think of ways to give them a totally separate machine, even a laptop or cheap tablet. I've seen too many cases where kids manage to screw up machines quite badly - or at least get blamed for it. For now you may be quite secure, but they'll be watching and learning every little bit they can; they'll grow older and bolder/stupider, and given a chance they will think they know everything, try something, and you end up in a world of hurt.

Don't get me wrong, I really want you to succeed with your plan and I want to see how it's done in the end, but back up everything, and back it up well. External drives in another location (in case said kids screw up and, in trying to recover, screw up the backup too), and when you can, give the brats their own machine so yours survives :)

Good luck with it :)


Re: Have to say thanks..

Not having room for a second box is really not an argument....

The Linux box will probably be a bit bigger with the stuff he wants to do, but a Windows box fits anywhere.

And with a remote connection to the Linux box it does not matter where it is; it could be in the cellar or in a cloud.

Anonymous Coward

Sounds to me like Robolinux may be your answer

I don't know much about http://robolinux.org/, but I have installed it with a view to testing it out.


Solution to the Space problems

I think you should opt for the two-machine solution.

Having kids with access to a production machine is suboptimal (to say the least).

Regarding your no-space-for-a-second-machine problem: just strap it under the ceiling. Two U- or Z-shaped brackets should do the trick.

That way you don't need to worry. Your kids can toy with your Windows machine; your Linux machine is safe and sound out of their reach.


KVM is your best option, if you insist on your requirements

Whilst I'd suggest Linux on Windows is your best option overall, if you insist on your requirements KVM is the best option.

Xen is a great piece of software, but in the area of VGA passthrough it is decidedly inferior to KVM. You'll not get any support if it does not work.

Xen supports only Quadro devices for reliable passthrough. KVM supports AMD and Intel, but you will need a very recent Linux kernel (3.12+) and patches, plus a recent, patched Qemu. Google 'vfio VGA reset'.
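As a rough illustration of the KVM/vfio route being described (not the poster's actual config): the IOMMU is enabled on the kernel command line and the GPU is claimed by vfio-pci before the host driver can grab it. The vendor:device IDs and PCI address below are placeholders for your own card:

```shell
# Kernel command line (e.g. via GRUB): enable the IOMMU
#   Intel: intel_iommu=on      AMD: amd_iommu=on

# /etc/modprobe.d/vfio.conf - bind the GPU and its HDMI audio
# function to vfio-pci (placeholder vendor:device IDs)
options vfio-pci ids=1002:67b0,1002:aac8

# Qemu then gets the card at its PCI address, e.g.:
#   -device vfio-pci,host=01:00.0
```

On older setups the pci-stub driver plays the same claiming role that vfio-pci does here.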

Be very careful with your hardware choices and read around the subject first. Also, read the motherboard manual cover to cover before purchase. My motherboard, for instance, supports graphics cards at 4x PCIe speed in only one slot - all others are limited to 1x by the chipset.

Also be careful with USB passthrough. On Linux/Windows it mostly works. On BSD it does not - you have to do a VT-d passthrough of the controller rather than a single-device passthrough in Qemu. You cannot usually pass through a single port due to iommu groups. USB works by allocating a pool of resources to a certain number of ports. Again, on my motherboard I have five USB ports. These can be passed through in two iommu groups - so 2/3 or similar? Wrong! 1/4, or 4/1!
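To see how a given board carves devices into iommu groups before committing to a layout, a commonly used snippet (an illustration, not from the thread) walks sysfs:

```shell
#!/bin/bash
# Print every PCI device in every IOMMU group; devices that share a
# group can generally only be passed through together.
shopt -s nullglob
for dev in /sys/kernel/iommu_groups/*/devices/*; do
    group=${dev#/sys/kernel/iommu_groups/}
    group=${group%%/*}
    echo "group ${group}: $(lspci -nns "${dev##*/}")"
done
```

If `/sys/kernel/iommu_groups` is empty, the IOMMU is not enabled (see kernel command line options above), and no passthrough will work.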

I'm doing this because I enjoy fiddling with low-level Linux and virtualisation, and I'm stubborn. It's still leading-edge stuff and you can expect to encounter pain. Be very familiar with iommu groups, PCIe bridges, FLR, VT-d and whether your cards support the various types of reset before you buy any hardware.


Re: KVM is your best option, if you insist on your requirements

Many thanks! Kernel 3.12 is what I have in mind. Could start with Fedora 20 (for initial setup/learning) and later migrate to RHEL7 when it's ready (and my setup is ready too).


It does work, incidentally!

Just beware you'll have issues. On Windows, Catalyst must be installed manually by selecting the driver, then installing the CCC MSI. Running the installer normally always results in a blue screen.

It is fast enough to run games. You'll have to fiddle to get the best disk performance - use the virtio drivers. If anyone is trying to run older OSes, be aware that KVM/Qemu creates a VM which is quite similar to a Q35 chipset, but with differences. With ancient OSes you may need to use the Qemu 'pc' architecture (i440FX). It may also be necessary to use a CPU type of qemu64 or qemu32 in some cases, rather than 'host' or enabling KVM.
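A hedged sketch of the kind of Qemu/KVM invocation being described; the image path, sizes and PCI address are placeholders, not the poster's setup:

```shell
# Windows guest with a virtio disk (guest needs the virtio drivers
# installed) plus the passed-through GPU:
qemu-system-x86_64 \
    -enable-kvm -machine q35 -cpu host \
    -smp 4 -m 8192 \
    -drive file=/var/lib/libvirt/images/win7.img,if=virtio,cache=none \
    -device vfio-pci,host=01:00.0

# For an ancient OS, fall back to the older emulated chipset/CPU:
#   -machine pc -cpu qemu64     (or qemu32 for 32-bit guests)
```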

Remember that the VFIO or pci-stub driver is separate from KVM. Passthrough works without KVM; KVM only provides acceleration, which is usually (but not always) faster.

The virtual PC that KVM, Qemu and Xen create is similar to a real PC, but it is not the same. OS/2, for instance, does Weird Shit(TM) on install (to be precise, non-mainstream OSes may tickle VMs in a way that's entirely valid on real hardware but freaks out the VM).

If I were doing this professionally, I'd use Xen and stick to a released version, ideally without VGA passthrough. XenServer is now free and a nice piece of kit.

What I really should have done is buy a dual-Xeon system with a Quadro and run Xen. What can I say - I'm waiting for Haswell-E before upgrading and spending lots of money. In the meantime I'm running an unusual Core2Quad system with a 6950 (pre-Nehalem VT-d works, but has no interrupt remapping).


Avoid btrfs

It's not really ready yet. I tried it out for a while but went back to ext4 after an unrecoverable filesystem corruption (and there's still no fsck tool).


No objective

If you aren't clear about what it is meant to do, how can you make a choice?

It seems to be about trying this or trying that.


Maybe a simpler solution,

If you have limited desk space, but are happy to spend on hardware, then the solution is to have a box under the desk running a headless Linux configuration, and to replace the keyboard, monitor and mouse with a laptop running Windows - making sure the laptop is good enough to run your games. If desk space is at too much of a premium, move the laptop to somewhere else in the house with more space.

Always go with as simple a solution as possible; if there is a problem with the hardware you could lose both systems!


By the way, PCI passthrough works fine with ESXi. I'm using ESXi as my host machine, and passing through the video card to a Windows VM. Set up the VM to auto-start when ESXi starts, so as soon as it loads it automatically starts the VM and on screen you see the Windows VM. I'm also passing through USB ports to plug in a keyboard, mouse, and an external hard drive. All has been working perfectly for the last year or so.


Cloud?

How about building yourself a small gaming rig? Mid-range CPU and a fast graphics card.

Get yourself an account with your cloud vendor of choice, with a nice Linux build on there.

Anonymous Coward

2 Box solution

A late entry, but I'd support the two-box solution. You say you don't have space for two boxes, but you could easily fix a bracket under the back of your desk to mount a netbook running Ubuntu Server; it'll just sit there out of the way. Even wall-mount it - it's only the space of a book. I've been running such a setup for quite a few years. Your family get a full-blown Windows machine and you get a decent Linux server. Quite honestly, I don't understand why you would even consider all the faffing about trying to do this with one box.


Space

If space for a 2-box solution seems problematic, how about something that will hang on the VESA mount on the back of the monitor:

http://www.fit-pc.com/web/products/mintbox/

http://www.solid-run.com/

etc etc etc


Re: Space

The size is the problem here. I want a "no compromise" solution where Linux has lots of power (a two-socket E-ATX motherboard) for serious stuff like building gcc very often. For Windows I also want a "no compromise" solution with a strong (and by necessity large) GPU card and some interesting peripherals. This would normally require two large boxes, for which I do not have space. But I do not need much processing power for Windows, and I do not need any interesting peripherals, not even a GPU, for Linux. Ideally Linux will run headless and provide network services for the Windows guest (mostly sshd and a filesystem).

These systems are meant to complement each other, so why not use the parts the same way and actually put them in one box? If this works, I will also gain something no two-box solution can offer: flexibility to move resources between systems as I see fit, simply by configuration tweaks. And there is convincing evidence, also in this thread, that such systems have been built and are known to work.


Re: Space

"Building gcc very often" is a batch job, not something to drive the spec of a box. Even so, there are plenty of single-socket machines that will be quite respectable for that kind of workload. Even some older hardware (like my hex-core) would be respectable for that sort of thing.


Re: Space

"Building gcc very often" is a batch job

For me, making a change in a complex program with very little type safety in its design is a continuous process of making changes, building it, and then running tests. And waiting, while the build and tests run.

If you have better workflow for making gcc changes, I'd love to learn about it.


Re: Space

>If you have better workflow for making gcc changes

If this is your main requirement, then as it is a batch job it would seem to be an ideal task to off-load to the cloud, i.e. off-site...

Whilst this might at first examination seem expensive, remember the size of system you are intending to build won't be cheap, and that is before we consider the heating and ventilation requirements and the noise of fans.

Personally, if you have space for a tower system that can support 8 HDDs plus all the other stuff, you've got space for a small cluster of blades, which might be a better solution to your family's computing needs.


Specifics

When you say you want to be able to play games on the Windows side of things, is there any chance you could give an example of the sorts of games being played, and how well (graphically) you want to run them?

Also, how big a space have you got for the computer? If you do decide to go down the two-computer route, how about this case? http://www.mountainmods.com/u2-ufo-duality-mirror-black-powder-coat-solid-top-p-390.html

Would running Windows as the host, with VMware being able to allocate physical memory and processors, negate the issue you have with Windows taking resources away from the Linux install?


Are you looking at the problem the right way?

Correct me if I'm wrong (I know I probably am ;o)), but I have been intrigued by this discussion since I first read it 12 hours ago.

What you basically want is:

1. A PC which is capable of running some windows software and allowing the OS to see "decent hardware".

2. You normally prefer to use Linux as a working platform, but are happy to access it via SSH.

3. You have a serious space limitation which apparently precludes multiple computers.

4. Your budget for a perfect solution is potentially quite high.

Looking at the replies you've received, there are a lot of brighter people than me out there with good ideas about how you can do what you want with the technology available, but many are basically saying "virtualisation isn't there yet". Also, some of them are really saying: you need 2 computers.

I know this sounds strange, but have you considered putting 2 complete computers in the same case? Even building them into your desk, like the Power Desk concept from the early 90s, might work.

Think about it: get a couple of cheap cases, cut them up and fit them into the space available. Build computers in those cases, including appropriate hardware, and clad the exterior in metal or even wood, paying particular attention to really good airflow. Get a KVM switch for those rare times when you actually want to log onto the Linux box directly.

Screw it to the desk to make it harder to nick, problem sorted.

Or am I talking b#@*@~ks?


Re: Are you looking at the problem the right way?

I think you captured my requirements almost perfectly, and thank you for this summary. There is one more which I assumed was implied (from the subject, perhaps?). I do not quite believe that "virtualisation isn't there yet" unless my own experience tells me so. So, the implied requirement is: make the virtualisation do the work. I will share here later whether it worked (or it didn't).

PS Really, one average-width tower case is as much as I can fit in here. And perhaps some tiny NAS or microATX box in the corner (but this goes against Linux having the CPU power I want it to have).


Re: Are you looking at the problem the right way?

>one average width tower case is as much as I can fit in here

The mind boggles - do you type standing up? Have you considered buying a bigger place, or renting a lock-up? I'm imagining something out of Extreme Hoarders here, or the four Yorkshiremen. Ee, you wer lucky, ah used to dream a typing standing up. Well, when ah say standing up it wer really... Where do you put your coffee mug? Can you hang a second box out of a window? Someone suggested hanging one from the ceiling - don't tell us that space is all taken as well.

Seriously, do you have a printer? If so, and it's not wireless and already in another room, get a wireless one and put it somewhere else. What about two boxes side by side with a bit of audio insulation on top, and then the printer?

And I can't believe I've used the Paris icon, but it's as close to puzzled as there is.


Re: Are you looking at the problem the right way?

If I were a lone wolf then I would just put a server anywhere. As things are, I must consider my wife and children. And yes, moving out is definitely part of the plan, but if you haven't noticed, the property market is behaving rather strangely, especially in London. So this will take some more time and preparation. The printer is on a stack of drawers which sits on top of a desk next to a 30" monitor, under which are a large document shredder, subwoofer, my legs, spare toners and lots of cables. YES IT IS F*G CROWDED HERE. You have to come and see what builders call a "flat" in this part of the world.


Re: Are you looking at the problem the right way?

>I know this sounds strange but have you considered putting 2 complete computers in the same case

This was an option I've been considering. Back in the 80s there was a UK company that sold PC motherboards that were expansion cards - they sat on the EISA/ISA bus, with some software running on the motherboard to co-ordinate disk and network access. Obviously each PC required a keyboard, mouse and monitor (not forgetting licensed software).

Looking around the web, there are companies that offer multi-seat solutions that provide additional hardware so each user has a dedicated graphics adaptor but shares the host motherboard and OS (eg. Buddy B-680 Premium/Lite, NComputing, SoftXpand).

Obviously with Windows MultiPoint Server, MS also have an offering in this space. This planning guide might be useful: https://www.microsoft.com/en-us/download/details.aspx?id=18482

But I've not been able to find any recent products, although perhaps someone sells a small blade enclosure into which you could slot server blades...


There is an alternative.

If you had an AIO (All In One - built into the screen, so no more space required) computer running Windows, connected via the network (wireless if desired) to the Linux computer, you could run ssh in a window from the Windows box. This way you would have your Linux stack running natively and your Windows machine running natively. If required, you can connect the Linux video via HDMI to your AIO monitor and use your wireless keyboard and mouse. However, it does still seem the long way home (more expensive).

Again, the specifications seem arbitrary. It would help if you could state specific objectives (e.g. compile a whole Linux distro and build it into packages overnight, or in 4 hours).

The more I read this thread, the more the OP seems hell-bent on having Linux run as close to bare metal as possible. The solution above achieves that desire. What I would do - and I agree with most posts saying so - is run Windows (2000 to 8.1, depending on the task) and then Linux in a VM such as VirtualBox. That is the easiest and most usable route. I like having an XP VM just in case. Since it doesn't get any more updates, it is very stable for a private network.

The reason so many advocate this solution is that Windows graphics run best on a Windows machine. The VM video interface just works better for that combination, with Windows as the host. Most video cards/devices are designed to run DX* (the Windows API). Using DX* to emulate OpenGL is easier, as more info is available. Since the GPL conflicts with IP ownership for most graphics companies, they are not really keen on supplying chip-level commands/access/documentation. Without access to the hardware documentation, Linux drivers have trouble supporting the Windows DX* graphics API. Yes, I heard Nvidia say they are going to play nicer; wake me up when you see a finished combination. Yes, they support OpenGL, which is great.

0
0

You've got two mutually conflicting issues. By far the simplest way to solve them is to have two machines. A bit of lateral thinking... literally: bolt the Linux box to a wall shelf, side-on, out of the way.


http://www.overclock.net/t/1205216/guide-create-a-gaming-virtual-machine

Personally, CPU-wise the Core i7 3960X seems best [supports VT-d (C2 stepping only)], which is probably what I'd go for.

6 cores / 12 threads, 3.3 GHz (3.9 GHz boost).

Whatever you decide, Sandy Bridge-E/EN or EP is the way to go imo. If you opt for Ivy Bridge, read up on the overclocking problems associated with the cheap TIM paste they used!


Update

So, the hardware is ordered.

All should work well with VGA passthrough, and even if virtualisation turns out to be too difficult, it's still going to be a nice Linux server (and a GPU for my old PC):

* SuperMicro X9DA7

* 2x Xeon E5-2630V2

* Kingston DDR3 PC-1600, Registered ECC

* Sapphire Tri-X R9 290X

I guess, if I really have to make this Linux box a separate server, I can always put it into a 2U rack case and slide it under a bed ;D


I have no experience with Xen, but at least the KVM documentation says that while device passthrough is supported, video card passthrough is NOT. A few people have managed to get it to work with some patching.

I do see some documentation on it having been done successfully with Xen, however.

I agree with other people on Btrfs. The developers say it isn't ready for production use. Of course, if your machine is just to run games and do some hobby work, then that might be good enough.

I too would avoid AMD graphics cards. I am also personally a Debian fan, and have no interest in or appreciation for the commercial Linux distributions.

Anonymous Coward

what solution have you chosen in the end?

Bronek Kozicki - I have registered on this forum only to ask this: I am very curious, what solution have you chosen in the end? Googling 'xen vs kvm for windows' gives this thread as the first result! :)

Back in 2007, when I bought my current PC, I had exactly your needs (except that I don't patch gcc but run some programs on Linux - proxy, firewall, other VMs - and want a single box for energy costs). I wanted PCI/VGA passthrough, but it was in its infancy.

Now that PCI passthrough seems more mature, I'd like to try it again. From the thread I've collected mostly opinions for a Xen-based solution (XenDesktop, XenServer, Xen HDX, Oracle JeOS, CentOS, Debian), one for ESXi and one for KVM. I'm quite interested in the final choice.


All I hear

All I hear from this comment thread is:

Linux - Linux - Linux....

The difference between Linux & BSD?

Linux is what happens when you get a load of PC hackers that want to port Unix to a PC.

BSD is what happens when you get a load of Unix hackers that want to port Unix to a PC.

You do the math!


lots of time have passed

.... for those who wonder what I've chosen in the end: I've been successfully running the following stack for nearly a year:

  • Arch Linux running as a headless hypervisor, where I configure, build & sign my own packages for software stack mentioned below, when and as I feel like upgrading them
  • kernel build closely following the current version from www.kernel.org, only a little behind for the sake of ZFS on Linux (currently 4.0.9, waiting for ZOL release 0.6.5 before upgrading to 4.1)
  • ZFS on Linux, current release + occasionally a patch or two (currently 0.6.4.2 with single patch from pull request 3344)
  • kvm with vfio GPU passthrough, AMD GPUs passed to Windows 7 (two GPUs, two Windows 7 VMs, plus some more VMs without GPU, all have qemu-agent installed). Linux console on serial port only (and of course ssh access). Linux radeon drivers are blacklisted
  • qemu currently version 2.3.0 will upgrade soon to 2.4.0 (or perhaps 2.3.1, if I do not like it)
  • libvirt with libvirt-guests to start and shutdown the VMs at the right moments. Patched libvirt-guests a little to use --mode=agent when shutting guests down
  • VM disks are set up as ZVOLs on ZFS; all VMs are snapshotted every now and then (alongside user files, below)
  • A filesystem on the same ZFS pool is shared under Samba as a fileserver for user files
  • Also using ZFS for Linux root, home and build directory (see top point)
  • Samba 4.2.3 running on a separate pocket-size ("next unit of computing", as Intel calls this format) PC as an AD controller, to which both Samba running on host and Windows 7 guests are attached as members. A second AD controller (also Samba 4.2.3) is running under a VM, just in case
  • zfs send | zfs receive, run occasionally to separate ZFS pool as backup (offline when not doing backup)
There are small quirks, and one has to be careful with upgrades, but overall it works pretty well.
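For illustration, the ZVOL disks, snapshots and send/receive backup from the list above would look roughly like this; the pool, dataset and snapshot names are placeholders, not the poster's actual layout:

```shell
# One ZVOL per VM disk, exposed to libvirt/qemu as a block device
zfs create -V 100G tank/vm/win7-disk0

# Recursive snapshot covering VM disks and user files alike
zfs snapshot -r tank@2015-08-30

# Incremental replication to the (normally offline) backup pool
zfs send -R -i tank@2015-08-23 tank@2015-08-30 | zfs receive -F backup
```

Because the snapshots are taken pool-wide, the VM disks and the Samba-shared user files stay consistent with each other in each backup increment.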


Re: lots of time have passed

Bronek, thanks for reporting back in such detail.

Don't know why you got a downvote, other than to suspect you probably didn't go with some fanatic's 'ideal' solution.


Right, it seems I failed to explain what I'm after. I've added a few more posts in the thread with explanations; hope it will start to make sense now.


Neither, for your use case. Things that would make sense: dual boot, or run a hypervisor on a separate box (which is what I do). I bought an Intel NUC PC which fits in the palm of your hand, and I installed VMware ESXi (free version) on it, though you can install another hypervisor. This gives you an almost invisible box to run several VMs on at a pretty low cost. It's also very flexible, so you can erase the SSD and load Windows/Linux on it if you no longer need VMs.


I was thinking of a couple of boxes: a Windows games machine and session client upstairs, and a big Linux box in the basement, as the OP likes his SSH and remote access.

