Register readers have told us (pdf) that virtualising significant portions of your x86 server estates is a major project area in which you plan to invest both money and, much more importantly, your time. This is all well and good, but as everyone in IT understands, x86 servers are not the only platforms in town. So …
Not comparable really
Firstly, Unix systems never experienced the server sprawl that engulfed x86 shops a few years ago, so there aren't the same drivers to virtualise and rein this back in. Secondly, a lot of Unix systems are mission-critical back-end systems. Virtualising these just adds an extra layer to manage and increases the chance of an outage.
In addition, Unix platforms have always been more receptive to running multiple apps on the same box in the same OS, removing much of the need to virtualise anyway.
We have a fairly large Windows virtualized farm on VMware: a few hundred servers we've migrated over or built on VMware over the last three years or so.
However, in the last 12 months we've gone from having two IFLs to play with to more than 20, running well in excess of 400 SUSE Linux servers on the mainframe. We expect to at least double that this year, and ANY server that CAN run on SUSE is being migrated to SUSE.
Windows needs one license per virtual server in most cases (OS excluded), whereas most IBM licensing is by the processor underneath. A POWER6+ 4-core processor is 480 PVUs' worth of licensing; an IFL is 120 PVUs, and can run 4-6 times the number of guest servers.
You can now get a mini-mainframe (Business Class z10) with two starter IFLs for under $200,000. That can run 40-60 servers easily. The same capacity on a 570 or 590 platform will cost about the same, if not a bit less (we pay about $80 per fully loaded VMware chassis), and it will also run 40-60 guests. However, licensing costs between them, for things like WebSphere, are about 8 times higher on the VMware side, and on top of that you need the VMware licenses themselves, plus the OS. A z10 BC can hold a whole lot of IFLs, plus a few zIIPs and zAAPs too. If you're virtualising a few hundred web and Java servers, DB2 or PostgreSQL/MySQL databases, WebSphere or MQ, then z/VM Linux on a z10 IFL is by FAR the cheapest way to go. We've also seen cost reductions in Oracle pricing, VRU services, and more.
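The commenter's licensing arithmetic can be sketched as a quick back-of-envelope calculation. All the figures below are the commenter's own (480 PVUs for a POWER6+ 4-core, 120 PVUs per IFL, 4-6x the guests per IFL); the guest count and midpoint multiplier are illustrative assumptions, not official IBM pricing.

```python
# Back-of-envelope PVUs-per-guest comparison using the figures quoted above.
# These are the commenter's numbers, not official IBM pricing.

PVU_P6_4CORE = 480    # quoted PVUs for a POWER6+ 4-core processor
PVU_IFL = 120         # quoted PVUs for one IFL
GUEST_MULTIPLIER = 5  # IFL runs "4-6 times" the guests; assume the midpoint

def pvus_per_guest(pvus, guests):
    """License PVUs consumed per guest server."""
    return pvus / guests

# Hypothetical: the 4-core box hosts n guests, so the IFL hosts ~5n.
n = 10
p6 = pvus_per_guest(PVU_P6_4CORE, n)                  # 48.0 PVUs/guest
ifl = pvus_per_guest(PVU_IFL, n * GUEST_MULTIPLIER)   # 2.4 PVUs/guest
print(p6 / ifl)  # prints 20.0
```

On these assumptions the IFL works out around 20x cheaper per guest in PVU terms, which is consistent with the roughly 8x WebSphere cost gap the commenter reports once real-world discounts and guest densities vary.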
The single binary image model is far superior to independent VM guest images (patch one system to patch a hundred). Operational issues are lessened by not running Windows: no one writes viruses compiled for OS390X hardware (since you'd actually need access to a mainframe or a Hercules VM server to do it), and LPAR replication between two chassis is easy (and you don't have to license the second chassis).
If you look at IBM list pricing for the IFLs, zIIPs, and chassis, it sounds like a bad deal, but consider: NO ONE pays list price for IBM mainframe hardware (we pay about half).
Is this a solution for an SME with 30-50 servers? No, but then honestly, neither is a true VMware infrastructure...
What is the end game (of sorts) for compute-side architectures?
In 20 years, will Itanium, Power or SPARC still exist? x86 will.
(If Nehalem-NextNext is 1/3 the cost and 98%+ the performance of POWER7, is there really a decision to be made, other than how quickly we can port to a Unix-based x86 flavour?)
(Poor AMD. Darn it, without them we'd have either Itanium or a Pentium 4 at 20GHz that didn't get any real work done, and we'd be paying out the wazoo for it. I hope AMD's new architecture keeps them competitive.)
Non-x86 Unix will be toast.
Software mainframes (think vMotion and VMware FT) will eventually displace/replace the real thing.
Insanely Beautiful Machines
Virtualization on non-x86? Easy: AIX/Linux on Power platforms. I'm not a big fan of IBM per se (especially as I work for one of their closest competitors - ahem), but the virtualization features they've got on the Power boxes (APV/PVM) are really powerful and easy to use. So much so that when you come to look at Solaris/HPUX/etc., you wonder why they do it the hard way...
I guess that's the upside of controlling both the hardware and the software as IBM does: okay, it's more expensive, but you can really layer on the useful features. Take a look at the PVM Concepts "Redbook"/manual and you'll see what I mean.
Okay, Power virtualization arguably isn't as widespread as it is for x86, but the AIX stuff is easily good enough for slicing and dicing big server frames into multiple production machines, even high-load, production-critical stuff with clustering etc. So you can then get the advantages of the bigger boxes: less floorspace per server, better cost per server, more resiliency features, etc.
And if it's not production but dev/test, then being able to fast-provision a shed-load of small servers (I've seen half a dozen images running happily on a little 2-core p520 'pizza box') on a resource-limited box is a godsend.
I initially wasn't that impressed with the virtualization, but once I'd had a good while of 'hands-on' I became a convert - so much so that my own "dev" Linux (x86) box at home now runs VMware Server with a couple of VMs on it. It's definitely the way to go.
Now what would be useful is some cross-platform architecture virtualisation - or would that be emulation?
I need to run some apps, produced by a certain company that is big in networking, on real Sun SPARC hardware. While the rest of my estate is x86-orientated, I need a couple of SPARC boxes to jog alongside for development and prototyping purposes. I believe that if the likes of VMware could incorporate SPARC emulation, I could get rid of my two ageing SunFire V120s running Solaris 10, and that would allow me greater flexibility when addressing client needs while keeping my costs at the right level.
That said, for production purposes I guess real SPARC tin would still be needed to hit those SUNny performance levels. Perhaps in another 12-24 months that could be addressed too.
I absolutely love how they include "Not answered" and "don't use this platform" in the graphs to make it look as if only a tiny fraction of unix and other systems use virtualization.
I need to remember that trick.
What's with the gradients?
The graphs are difficult to read. Simplify, simplify.
PowerVM is firmware for IBM POWER5, 6 or 7 systems. It comes in three levels; the highest includes Dynamic LPARs plus an x86 software emulator for running Intel Linux on Power. IBM's UNIX also has a Solaris-like 'software hypervisor' that permits booting multiple AIX/Linux images under AIX. Those AIX/Linux images have special code to support drives/NICs/queues/etc. in this environment.
For performance, PowerVM is well ahead of VMware. You lose about 1% of each non-dedicated core when using PowerVM (the firmware redispatches such cores 100 times a second). There is a vMotion-like function called LPM (Live Partition Mobility) that permits moving a partition of any size from one hardware platform to another without users getting signed out. The Power architecture also permits concurrent microcode changes, so system/HBA/whatever firmware can be kept up to date without downtime.
Cool stuff, but Power systems start at $8K and go up into the millions. Will they compete forever with x86? Geez, if I knew that, I wouldn't be wasting my time reading the tech web.
Computing is always segmented.
It is a golden rule in marketing that "you do not have a single product that suits the entire market".
One may argue that the computing industry is converging, but it will never consolidate into a single market segment, since the market is diverse: telecommunications, financial, retail, public services and defence industries all demand different attributes from their computing environments. Just as you cannot compare x86 with Unix or with the mainframe.
IBM also offers a special partition-only, cut-down AIX that provides services to other AIX/Linux partitions within the same hardware machine. This software (called VIO) enables SCSI or FC sharing, partitioning of hard drives (so multiple systems can use a 300 GB drive for rootvg, for example) and other hardware sharing. VIO certainly takes MUCH more than 1% overhead, depending on how much the hardware infrastructure can assist VIO. VIO gives some of the 'software hypervisor' style magic without being a hypervisor at all. Some functions that were VIO-only on POWER5 have moved to firmware assists on POWER6 (I don't yet know if more got there on POWER7). Shared Ethernet Adapters used to be high-overhead in software-only POWER5, but don't involve VIO at all on POWER6 (where did the overhead go? It didn't ALL go away). Some think that VIO itself is a stopgap until IBM can push as much as it can into hardware virtualization.
Tony, I disagree with your observation about UNIX vendors and virtualization.
I've been a Sun, Solaris, SPARC guy for close to 20 years and watched Sun's (head bowed ... may its memory rest in peace ...) virtualization efforts move forward. Being an engineer, I've also watched with interest IBM's virtualization efforts on the AIX/Power side of things. Both companies have made concerted efforts over the years to promote their virtualization capabilities within AIX and Solaris. Both companies IMHO have done a pretty good job in their own ways. The fact that VMware is the 900-million pound monkey in the room doesn't mean Sun, IBM, (ok, and) HP have done a poor job promoting their virtualization capabilities ... it means that VMware has done a better job.