A brief look at virtual machines for home use resulted in several requests for system specifications and configuration details. It seems some of you would like to take a go at replicating my setup. The hardware is simple. The motherboard is an ASUS P8H67-I Deluxe, with an Intel Core i5 2500 CPU, two 8GB Corsair SODIMMs and an …
Nod here for Proliants as well
Got three ML115s with another one running Openfiler as shared storage. Works an absolute dream running ESXi. Cheap as chips too - under a grand for the lot (though the array card is an absolute pig due to its tiny amount of cache). Why anyone would ever run Windoze virtualisation at home when XenServer and ESXi are free and infinite orders of magnitude better is beyond me.
RemoteFX
Anyone else running RemoteFX? How are you finding your performance? I've got a well-specced thin client hooked into a VDI instance of Win 7 Ultimate over gigabit and the RemoteFX performance sucks. The display drivers keep crashing and recovering, and there's major lag on simple things like dragging a window. Launching Zune drags everything almost to a halt, and Silverlight webpages have a similar effect. This is on an otherwise clean install... Is this normal, or have I screwed something up? I did cut corners on the Hyper-V box processor - only an Athlon II X2 @ 3GHz - but it's otherwise mostly idle.
Have a look at maybe unlocking any extra cores on that: I got an X2 Black Edition, and found out that with a BIOS tweak I could unlock the two disabled cores (X2s are X4s nobbled for economies of scale). One of them kept crashing the machine, but one was fine, so I've now got an Athlon X3! 8-)
Complex!
Grab any old Intel machine, install whatever OS you want on it, then install VirtualBox. Job done.
How does that work as a shove in a cupboard solution? Ideally you want something that you can administer remotely and know will restart without throwing a single prompt or request.
It works fine in a cupboard. VirtualBox is happy to run VMs headless and you can administer them with VBoxManage.
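For anyone wanting to try the headless route, a minimal sketch of remote administration with VBoxManage (the VM name "homeserver" is just a placeholder for whatever you called yours):

```shell
# Start an existing VM with no display window attached
VBoxManage startvm "homeserver" --type headless

# See what's currently running
VBoxManage list runningvms

# Send an ACPI power-button press for a clean guest shutdown
VBoxManage controlvm "homeserver" acpipowerbutton
```

Combine that with an SSH login on the host and the cupboard never needs a monitor.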
Absolutely! Home-built i5 with 8GB RAM running Ubuntu Linux. It hosts 2 VMs (VirtualBox) running legal instances of Win XP: one for Win-only CorelDRAW and TurboCAD, and the other for old Win games. Fast to boot and fast running. A third one is on the way to have a good look at Linux Mint. What more could you want?
My point is more that, depending on the any-old-OS installed, it may not necessarily come up cleanly without prompting. I have certainly had issues with a Linux variant dropping its remote login permissions after an update, which meant I needed to attach it back to a screen to re-enable them. It is therefore perhaps better to use something like ESXi (or equivalent), which expects to be remotely administered.
Nonsense, ad infinitum...
I installed ESXi onto the hardware, where I run Windows 2008 R2 with Hyper-V, in which I have a Windows 7 desktop VM running with XP Mode enabled so I can run an XP VM, into which I've installed........
Yes......but can it play Crysis?
Nope
Intel IGP can't play anything. Ever.
Reader
Apparently, you don't read Reg articles: http://www.theregister.co.uk/2012/01/06/remotefx/
Apparently, you don't read comments on Reg articles. I don't think you fully understood what "but can it play Crysis?" actually means.
Machine count
With a large virtual landscape comes a large system administration role. Many companies have a smaller machine count than you, and employ staff full time to keep them all buzzing. Still impressive though.
Machine count
Household machine list:
2x HTC Desire
1x Galaxy S II
1x Moto Droid
1x Galaxy Tab Classic
2x Transformers
2x Samsung NF210
1x Alienware MX18
1x Virtual Server
1x Ridiculous over-the-top Core 2 Quad gaming desktop
1x Holy [bleep] Dual CPU 8-core, 64GB RAM Server --> Gaming Rig Of Doom
2x NAS
2x Wifi Routers
3x Consoles
As much as that may sound like a bit of a list... it covers 3 people's worth of stuff. Considering that I probably maintain upwards of 1000 endpoints and 5000 servers in the field - about 200 endpoints and 50 servers of them with the help of another sysadmin - the home setup is pretty small potatoes. Still, it's nice that it all "just works."
The Old Days
When I /worked/ with computers, I wouldn't have one in the house. At all. Ever!
Now I spend twice as many hours in front of one as I did before I retired.
Where virtualisation is concerned, I begin with the attitude that a good, multi-tasking operating system will be able to do it all anyway, without adding the further overhead of virtual machines, so, for me, the ideal machine would have zero virtual babies.
But life is not ideal. My renunciation of all things virtual requires at least a monthly exception, because there is something about my Ubuntu/Firefox/Java combination that doesn't work with my online banking site. This calls for the XP virtual machine. Trial versions of Linux also call for virtual machines. Virtual machines are superb for sandboxing, or for running different versions of whatever OS needs to be given a whirl. All of those, however, get run from time to time, as and when needed; none of them run concurrently. Having "grown up" with the power of Unix, I /expect/ that my Linux machine will do all that is required of it in terms of file/print/etc serving, and it does (but hey, the home network is, err... a whole two machines!). It runs Samba to serve files to one other machine - just as my RS/6000 ran Samba for 40 machines, as well as an accounting package and two database engines.
My 'downsides' of virtual machines (VM VirtualBox), at least when the host is Ubuntu and the client is XP: poor file access performance; sound goes through a virtualised device, and the quality is poor; no access to PCI devices (they say it is coming)... that is all I can think of just now. Otherwise: it is much quicker to boot virtual XP than to reboot into the actual WinXP partition, and pretty much everything, in so far as this can be said of Windows, does /just work/.
Sys Admin overhead
Even a device as humble as a Rockboxed MP3 player requires some sysadmin overhead. And just a handful of servers can fill your time if peopled by a busy and demanding user base. This is perhaps less so in the Windows world, where things are more off-the-shelf, I guess.
The sysadmin burden generated by a server landscape depends on many things - if it is homogeneous, if it is non-production, that helps. But I have yet to see a sizeable landscape, real or virtual, that does real work 24x7, without generating a maintenance overhead. Even "lab" systems that nobody really cares about need some love.
Virtualising systems can reduce the overhead in some ways, but increases it in others. Sure, you can bump the CPU count with a single mouse click. But you end up with an awful lot of servers depending on the same kit, and sometimes on each other. E.g. cloned VMs often have an enduring dependency on the source object, as can be the case with Solaris LDOMs cloned with ZFS - they all depend on one snapshot.
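To illustrate that clone dependency with plain ZFS commands (dataset names are made up): a clone is writable, but it stays tied to the snapshot it was created from until you promote or destroy it.

```shell
# Snapshot a golden VM image and clone it for a new guest
zfs snapshot tank/golden-vm@base
zfs clone tank/golden-vm@base tank/vm01

# This FAILS while tank/vm01 exists - the clone holds the snapshot:
#   cannot destroy 'tank/golden-vm@base': snapshot has dependent clones
zfs destroy tank/golden-vm@base

# 'promote' reverses the parent/child relationship so the
# original image (and its snapshot) can eventually be retired
zfs promote tank/vm01
```

So a dozen guests cloned from one golden image all hang off one snapshot - exactly the enduring dependency described above.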
AMD?
Interesting article, as I was just speccing up a budget system to use for this too. I can get a new 8-core AMD FX, an ASUS mATX motherboard, and 2x8GB Kingston value RAM for around £400-£450. The other bits we can cannibalise from existing kit. Should make for a nice VM host!
I want to cheer for AMD
...but for an always-on machine I'd go with the i5 2500 Trevor is using. It is much more power efficient for a use case like this, and if you're in the States, Microcenter has the unlocked version on sale for $179.
Personally, I'm about to build a file server / HTPC with an E350. I like to see how cheap and efficient a solution I can put together for a given problem, and am looking at the i5 2500's successor (and a new AMD GPU) when I finally upgrade my workstation later this year.
Personally
I use a Gigabyte board (I can't remember which); it's got an AMD 64-bit chip of some sort and has been ticking along for about three years. The PSU is a similar 80+ high-efficiency jobbie. The disks are four Western Digital Greens (2x500GB, 1x1TB and a 1.5TB for backup VMs only). The software is ESXi 4.1, which was frigged slightly to see the disk controller and actually runs from a memory stick, which should make upgrading the hypervisor a bit easier.
My main peeve and word of caution is that after about three years my motherboard is coming up for replacement. The sole reason is that it can't take more than 8GB of RAM and I really need to upgrade to about 16GB. This would be my main caution for people looking at designing this sort of system - make sure it can take about four times as much RAM as you think you'll ever need. The processor really isn't an issue for most installations, but the RAM isn't really negotiable for modern OSes.
Samba
Anyone using this sort of setup to host a Samba-based domain / DFS? What's performance like?
@Audrey S. Thackeray
Identical specs, runs 2x PVMs, and a CentOS LDAP DC + Samba DFS for 10 people. (Front-ends about 25TB of storage.) It's not enough. The RAM requirements for the Samba DFS really should be about 16GB on its own. If you upped it to a Micro-ATX board with 32GB of DDR3, you'd be gold. The CPU/network isn't the limitation; the two SODIMM slots are.
Doesn't the i5 max out at 16GB of RAM?
I thought you had to go i7 to get to 32.
Could be wrong, of course, but I could have sworn I heard that on Tom's or AnandTech.
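For anyone wanting to try the Samba DFS setup discussed above, the smb.conf side is only a few lines; a minimal sketch (share name, path and server names are all placeholders):

```ini
[global]
    host msdfs = yes

[dfsroot]
    path = /srv/dfsroot
    msdfs root = yes
    read only = no
```

The junction points inside the DFS root are then just specially formatted symlinks on the server, along the lines of `ln -s 'msdfs:fileserver1\share1' link1`, so clients browsing \\server\dfsroot get transparently referred to the real shares.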
This is all making me envious
FreeBSD doesn't really handle many VM technologies (well, not as host), and I went with FreeBSD for ZFS. My home server has 12 x 1.5TB drives in it for ~16TB of raidz storage: 6 off the onboard SATA ports, 6 from 3 x 2-port PCI-e cards. All built with commodity/consumer parts; excluding the disks I think I spent about £450, largely on the case.
I record a lot of TV content and archive it - I don't like deleting media, and I have an archive of about 5 years of MOTD. The jump to HD leads to disk disappearing at an ever-increasing rate; I think I will need another 6 x 3TB disk expansion soon to add another 15TB, although prices need to come down a bit before I'll bite!
I do export an iSCSI target onto which I have installed Windows 7, which allows my desktop to boot and run over the network, and then I back up/archive/snapshot it using standard ZFS tools, which is pretty nifty.
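The snapshot/backup workflow for an iSCSI-backed boot volume like that can be sketched with standard ZFS commands (pool and zvol names here are hypothetical):

```shell
# Point-in-time snapshot of the zvol exported as the iSCSI boot target
zfs snapshot tank/win7-boot@nightly

# Replicate the snapshot to a second pool for backup
zfs send tank/win7-boot@nightly | zfs recv backup/win7-boot

# Roll the boot volume back to the snapshot if Windows eats itself
zfs rollback tank/win7-boot@nightly
```

The guest OS never knows any of this is happening, which is the nifty part.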
You're like the digital equivalent of that "Hoarders: Buried Alive" show. Of course, I have no room to talk either - I have more than I have time to watch already, yet the collection keeps growing...
Cheers : )
Hell, I have almost 70,000 digital texts (books for the most part) that I've accumulated over the last 30 years. That would be a few forests, methinks!
SSD?
This is a key element, right? On our whitebox ESXi server (Vostro 460, 8GB, i5-2500) all is good until multiple VMs access the same HDD, say when applying updates; then it can slow right down to a crawl.
You don't need big hardware
I have 2 Linux VMs running under VirtualBox on an old laptop with 4GB of RAM. And because the laptop has a high-end Nvidia card, I can also run XBMC on it.
I paid $110 for the laptop, plus another ~$40 for 2GB of RAM. Works great, even using Windows 7 as the host OS. I'm not entirely sure why you need to spend almost a grand on hardware, but to each his own.
And if you are using enterprise hardware, watch your electricity costs. I found that a 3U ProLiant server was costing $50/month in electricity alone - switching to a NAS plus a low-powered Atom machine was worthwhile.
250W 80+ PSU
I saw one on Newegg today - it wasn't the standard ATX form factor, but it was a desktop power supply. I wanted to go Pico, but the spin-up draw of multiple 3.5" hard drives killed that idea, although I will be back for my next HTPC. There is a 400-watt 80 Plus Gold for $69 that I have my eye on for my next build.
3 x DL140 G3s, with HP P400 BBWC RAID (1 incomplete): 1 with VMware ESX, the other will be a Linux KVM host, distro as yet undecided.
1 x DL385 with a P400 internal and an additional P212 BBWC RAID, and quad LAN - again a Linux KVM host when decided.
1 x IBM x3550 with quad LAN and BBWC, again waiting for the version of Linux KVM to be decided.
The VMware machine will host 2-3 Windows 2008 R2 machines. Currently one is set up bar a few niggles and will have just a few websites of someone living here switched over in the near future; the other machines will run a more capable version of 2008 R2, one for building software and one for additional functionality beyond basic web servers when and if needed.
This machine also has 3 Linux VMs; currently I'm just looking at web-GUI-based KVM management software.
At the other end of the scale...
I have Windows 98 running in a VMware Player VM on her ladyship's Win7 64-bit laptop so she can run Aldus PageMaker and her favourite card game... Sadly my copy of SimCity 3000 won't run in a VM.
SimCity 3000 works in VirtualBox. No luck getting it to work elsewhere. :(
Win98 in VM!
Care to share how you managed to get Win98 installed? After jumping through all the setup hoops to finally get it installed, it still helpfully freezes up for no reason until the VM is restarted. Please do share!
For older games...
For older games, and Win 3.x, there is always DOSBox, which is excellent. I have it on PPC Mac, Linux and Windows. The best bit is that it allows you to run IPX/SPX encapsulated in an IP tunnel, so you can play all those old link-up games.
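For anyone who hasn't tried it, the IPX tunnel is switched on in the DOSBox config and driven from the DOSBox prompt; roughly like this (the host address is a placeholder, and 213 is DOSBox's conventional IPX-over-UDP port):

```ini
; in dosbox.conf - enable the IPX emulation
[ipx]
ipx=true
```

Then one machine hosts with `IPXNET STARTSERVER 213` at the DOSBox prompt, and the others join with `IPXNET CONNECT 192.168.1.10 213`, after which the old games see an ordinary IPX network.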
Windows 98!!! My stepfather whines and cries and shakes his fist in the air because he had to upgrade to XP. I don't know how you're making it work, but well played sir. Well played.
@Alex/Win 98 in VM
'fraid I just did it, there weren't any hoops or anything. It was fairly straightforward. However I didn't manage to get it to run in any significant manner on my XP box, but on the Win7x64 it just worked.
A blocker for running older OSes under VMware vSphere is the lack of IDE drive support. For desktop products like VirtualBox (a personal favourite), VMware Player, or Virtual PC, this isn't a problem.
A few years ago I filled a week between contracts happily building VMs of all the old OSes I still had lying around. I even got Win 3.11 going (downloaded from TechNet!) before common sense kicked in...
I don't know if this is strictly applicable here, but way back when, I had to remove all but one of my sticks of RAM to get '98 completely installed, let alone running well. Anything over 512MB would give it conniption fits. Once I installed the unofficial 98 SE SP roll-up (it had all the official post-98 SE SP1 patches rolled in, as well as some neat hacks/fixes) I could go to the full 1GB or more. YMMV.
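For the record, the usual workaround for Win98's large-RAM conniptions is capping the disk cache and addressable memory in system.ini; something along these lines (values are the commonly circulated fix for ~1GB, so treat them as a starting point, not gospel):

```ini
; system.ini on the Win98 guest
[vcache]
; cap the disk cache at 512MB so it can't swallow the address space
MaxFileCache=524288

[386Enh]
; limit physical memory to 1GB (0x40000 pages x 4KB)
MaxPhysPage=40000
```

In a VM it's often simpler still to just give the guest 512MB and skip the tweaking.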
I have several VMs I use: an ESXi host on a Dell 2850 with 8GB, and a Core 2 Quad box with 16GB of RAM that runs Server 2008 R2. I've got VirtualBox VMs and ESX VMs that I float around the six-odd workstations in the house, and yes, don't laugh, but I have a sub-$300 Compaq netbook powering my home theater system. A 1TB USB drive makes it a lot more usable.
Have to say I do love the articles; glad to see I'm not the only nut who likes to use VMs to keep their workstations clean.
Q re multiple VMs under ESX(i), and disk IO
"multiple VMs access the same HDD, say when applying updates, then it can slow right down to a crawl. "
Something a bit similar observed here at work, but the conclusion was rather surprising.
Reasonable spec Dell server (details irrelevant) running ESX.
With only 1 VM active, an app on the guest gets roughly the disk IO performance (MB/s, IO/s) you'd expect if it were native. With 2 VMs, the disk IO performance seen by the app is halved, ***even if there is only one of the VMs doing disk IO***. Try it with N VMs active (N-1 idling), and the sole VM actually doing disk IO gets around 1/N of the disk IO Megabytes/s and IOs per second you'd expect on the raw hardware.
Is this really expected behaviour for modern enterprise-class IT-department-authorised software?
"Is this really expected behaviour for modern enterprise-class IT-department-authorised software?"
Yes - unless you use enterprise storage