Virtualization software to crush server market

Virtualization software will apparently cripple the low-end server market. Analysts and executives came out this week and declared that x86 server shipments will likely decline as VMware, Microsoft and a host of start-ups push their virtualization wares at speed. This thesis du jour centers on the notion that customers will buy …

COMMENTS

This topic is closed for new posts.
  1. Anonymous Coward

    s/Virtualization/Virtualisation/g;

    Good grief - just a couple of weeks ago, El-Reg was boasting of its British origins, and now this Americanised nonsense. In Britain we use "ise", not "ize" - suggest you get yourselves a sed script which scans all articles for such Americanised nonsense before they're published - or better yet, automate converting to 'ise' for .co.uk and 'ize' for .com!
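    In Python terms, a toy version of that filter might look like this (a sketch only: a blanket rule mangles words like "size" and "prize", so a real converter needs a word list):

      import re, sys

      # Toy -ize -> -ise filter. A blanket pattern mangles words like "size"
      # and "prize", so a real converter needs a word list; this shows the idea.
      def britishise(text):
          return re.sub(r"(?<=[a-z])iz(?=(?:ations?|e[ds]?|ing)\b)", "is", text)

      print(britishise(sys.stdin.read()), end="")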

  2. Jason Clery

    1 box

    That's right, stick everything on 1 box so when the hardware fails you lose it all.

  3. Anonymous Coward

    Glad to see the Reg talking sense, but ..

    Two major things always get overlooked with virtualisation:

    1) What happens when the host server crashes? You'll loose ALL your services rather than just a few.

    2) What happens when the hardware breaks down? You are completely f#@*ed.

    And don't give me this "it can be clustered, replicated" crap - it's just another layer of complexity, and by the time you've paid for the licences, the consultants, the training etc etc etc, you've saved nothing.

    At best virtualisation is mediocre for some non-essential systems, or developers etc - but it's complete crap if you want business continuity.

  4. Anonymous Coward

    "everything on 1 box so when the OS fails you lose it all."

    It's already been pointed out that a one-box answer has hardware "issues". A failure in a non-resilient subsystem means everything dies. Fortunately today's better servers can have lots of hardware resilience: disks, memory, power, fans, network, all can be configured resiliently. Maybe even processors, after a fashion. All well and good, right?

    So what do you do about a failure in the underlying OS (or, heaven forbid, in the underlying virtualisation layer)?

    Good job OSes and virtualisation layers are 100% reliable, right?

  5. Vulpes Vulpes

    Stop mucking about. Buy a mainframe.

    Client server, client schmerver. Pah.

    Join the big boys, buy a mainframe.

  6. Joshua Sugarman

    RE: when the OS fails you lose it all.

    Only if you're stupid enough not to do backups?

    VMware ESX allows for SAN storage of the VMs, so if the OS fails on the server that's running them, you bring up another temporary host and run the VMs from there. No data loss, just a few minutes of downtime...

    And then you'll say "what if the SAN fails?". Simple... RAID, or whatever other disk backup...

    A perfect solution includes multiple links between 2 SANs and 2 servers running virtual machines. Once set up, the reduction in hardware and management costs (and electricity, which can be measured using PowerRecon) is so great that it's definitely a profitable and efficient solution!

    Virtuozzo is another innovative technology which provides software virtualisation rather than hardware virtualisation to directly tackle these issues.

  7. Richard

    Processors and memory

    Virtualisation in the way Xen & VMware do it isn't magical. If you allocate 10 virtual machines (VMs) to 10 tasks, and each task requires 1 GB of memory, then your server needs > 10 GB of memory. Similarly, you're best off allocating a CPU core to each VM to avoid contention. The same applies to disk space. So although you may only have a single case and power supply, you still need to buy all the really expensive stuff (CPUs, sticks of RAM, disk drives) in quantity.
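    To put rough numbers on that (the disk and hypervisor figures below are illustrative assumptions, not measurements):

      vms = 10
      ram_per_vm_gb = 1.0      # each task needs 1 GB, per the example above
      cores_per_vm = 1         # a core per VM to avoid contention
      disk_per_vm_gb = 50.0    # assumed per-VM disk allocation
      hypervisor_ram_gb = 1.0  # assumed overhead for the hypervisor itself

      host_ram_gb = vms * ram_per_vm_gb + hypervisor_ram_gb
      host_cores = vms * cores_per_vm
      host_disk_gb = vms * disk_per_vm_gb
      print("Host needs >= %g GB RAM, %d cores and %g GB of disk"
            % (host_ram_gb, host_cores, host_disk_gb))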

    "Containerised" approaches (OpenVZ, Solaris Zones) are a bit more efficient in that they only run one kernel, so they can share CPU, RAM and disk space. However they are never going to be as flexible as full virtualisation (Xen, VMWare) because you can't run multiple operating systems alongside each other, and the separation isn't so strong.

    Rich.

  8. Anonymous Coward

    It will cut our hardware spend.

    The company I work for has literally thousands of servers, roughly one per 15 employees. Average CPU utilisation is around 8% (eight, not eighty). Using virtualisation will overcome the reluctance of project teams to share a server and can be expected to bring utilisation up to about 40 to 50%, which equates to a significant number of servers saved, a reduction in maintenance costs and even lower electricity bills.
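    The back-of-the-envelope arithmetic, with an assumed fleet size (the exact figure doesn't change the ratio):

      servers = 3000        # assumed fleet size; ours is "thousands"
      current_util = 0.08   # today's average CPU utilisation
      target_util = 0.45    # midpoint of the 40-50% goal

      # Crude CPU-bound packing: the ratio of target to current utilisation.
      ratio = target_util / current_util
      hosts_needed = servers / ratio
      print("~%.1f:1 consolidation -> roughly %d hosts instead of %d"
            % (ratio, round(hosts_needed), servers))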

    (And no, we don't stick everything on one box as Jason suggested. We will still cluster, we just have to make sure that the VMs the application servers are on are running on different physical hardware. If one box does fail we lose part of a few different clusters, never a whole cluster.)

  9. regadpellagru

    quoted for truth!

    As has been written in the article, virtualization won't affect the server market, since hardware provisioning is dictated only by the availability of funds! IT departments will just stack more apps in the datacenters, and the systems bought will simply shift from 1-4U boxes to blade systems with a lot of memory.

    The same goes for the consumer market: will people buy fewer systems because video games only use one core, never use bloody SLI, or use less CPU power?

    Nah, won't happen. People will buy the same newer gear because Vista needs 1 GB just to run Minesweeper!

    IT loathes a void, and thus will fill it up!

  10. Simon Hobson

    Seems about right to me

    Virtualisation isn't going to be the answer for everything, but in our racks (and scattered around, because we've run out of room) we have a fair number of lightly loaded boxes that really should be due for retirement. Some are short of disk space, but most have space to spare - and all take more power than would be needed for an equivalent VM.

    We are looking at putting in a smaller number of larger boxes and running virtual machines - most of our services have fairly lightweight requirements. In fact we'll end up with more VMs than boxes removed, as we'll take the opportunity to split functions a bit more instead of stuffing things onto whatever box still has space to run them!

    As for reliability, we already have some impressive uptimes, including uptime counters wrapping round at 497 days, running on something that even Google would call 'cheap' hardware!

    Just for example ...

    Suppose you have systems with an MTBF of 3 years (or about 1000 days). If you have 10 of them, then you can expect a failure (on average) every 100 days - and since many systems are interlinked, each failure will affect multiple services. Consolidate those 10 servers onto one box and you can expect a failure (on average) every 1000 days. I know it's very simplistic, but it means the difference between each service having one-plus-something outages and each service having ONE outage in 3 years.
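    The same simplistic model in code, assuming independent boxes with constant failure rates:

      mtbf_days = 1000.0  # ~3-year MTBF per box
      boxes = 10

      # With independent boxes and constant failure rates, the rates add up.
      fleet_interval = mtbf_days / boxes
      print("10 separate boxes: a failure somewhere every ~%.0f days" % fleet_interval)
      print("1 consolidated box: a failure every ~%.0f days" % mtbf_days)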

    Also, the total downtime will be less - it normally takes a similar amount of time to fix a broken box regardless of size, so ten failures will take significantly longer to fix than one. Hardware features to reduce downtime (redundant PSUs, RAID arrays) will also cost a LOT less (though more than a tenth, because each component will be bigger) when bought once rather than ten times over. Support hardware (KVM switches, power distribution, network switches) will also be less for fewer boxes.

    And when it does come time to upgrade a service, you may be able to just allocate it more resources, but if the host is now full then it's a LOT easier to migrate a VM to a new host than it is to move a native OS (all those new drivers for this, that, and all the annoying little bits and pieces that will be different on a new machine 2 years later). Under the right circumstances, Xen (and I assume others) can do this live! Using this, it should even be possible to migrate VMs off a machine, shut it down for upgrades (more/bigger disks, more processors, more memory, whatever), and then migrate the VMs back again - a hardware upgrade with no downtime (just a temporary reduction in performance) for services.
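    For a flavour of how little ceremony is involved, here is a sketch using the libvirt Python bindings (the host and VM names are made up; Xen's own 'xm migrate --live' does the same job):

      import libvirt

      src = libvirt.open("xen+ssh://old-host/")  # hypothetical source host
      dst = libvirt.open("xen+ssh://new-host/")  # hypothetical destination host

      dom = src.lookupByName("some-vm")          # hypothetical VM name
      # VIR_MIGRATE_LIVE keeps the guest running while its memory is copied over.
      dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE, None, None, 0)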

  11. Andy

    the ise have it

    It may be true that most Brits (including me) prefer to -ise, but the OED insists on -ize. Either is correct.

    "What happens when the host server crashes? you'll loose ALL your services rather just a few." -- Anonymous.

    Crashes like in a car crashing? I can't see any other way that the services will be loosed.

    -A.

  12. Anton Ivanov

    Virtualisation in the Unix world is a manifestation of IT ineptitude

    BSD jails, Solaris containers, mainframe Linux migration and even plain old chroot, along with making network-dependent applications use logical interfaces aliased to the system loopback, can easily deliver nearly all of the benefits of VMware and the like at a fraction of the cost. Once IT departments start moving applications onto one machine as a cost saving, it is only a matter of time until someone thinks of doing it properly and in a cost-effective manner. So VMware is actually digging its own grave: the more popular it becomes, the more likely it is that people will get funny ideas and compare it to archaic technologies like chroot (or modernised counterparts). Once they realise that it is on average 5-10 times less efficient than using the native tools that come with the OS, VMware is bound to lose.

  13. Frank Goddard

    VM is the Future

    If not, why have Intel and Cisco already invested over $400 million in VMware (VMX)?

    Intel went as far as to obtain a seat on the board of VMware.

    VMware GSX software is the server of the future.

  14. Anonymous Coward

    Proper planning prevents...

    To those of you who knock virtualisation -

    We run 9 (soon to be 12) large quad-core servers with a highly resilient SAN. These host around 140 guest servers running various operating systems. We've had a few instances of hardware failure in a host, but by moving the virtual servers between hosts we've had very little downtime. Scheduled maintenance is easy - move the guests off the host and flash the BIOS, or whatever you want to do, no problem. Redistribute the load when done.

    It's saved us lots of money - it's not just the purchase of hardware that counts, it's the running of that hardware - power, air con, etc. I'd guess we've saved close to £500k over the last 12 months.

    In the end it comes down to thorough planning and careful implementation - just throwing in the kit and expecting it 'to just work' is a fool's game.

    Proper planning prevents piss poor performance!

  15. Brian Miller

    Virtualize: it's what's good for ya

    The Boeing computer room in building 24 (IIRC, it's been a while) is about the size of a football field. You enter on the west side and start walking towards the east side. Lots of empty space there. At the east end of the room are the IBM mainframes, a couple of dozen of them. That's it. When you have an OS that supports good virtualization, your costs drop dramatically.

    Virtualization is also good for testing. Automation can run in virtual machines, reducing lab space significantly.

  16. Dave

    Proper planning prevents...

    There are some great planning solutions out there for virtualization; PlateSpin's offerings are some of the best, and they also have some DR solutions.

  17. Art Jannicelli

    Small Business and the short-sighted

    The last company I worked for had a farm of about 75 virtualized servers. It saved the company a lot of money by allowing us to transition off older platforms and consolidate rack space and power. As mentioned above, because we ran our VMs from our SAN, we rarely had any downtime, even in the event of hardware failure on a host node. For this company it was a great thing.

    However, in small businesses and/or companies where non-technical management holds the checkbook and is most concerned with quarterly numbers, VM just doesn't add up. These companies would rather get the bottom model of a top-of-the-line server range from Dell or HP that can be upgraded later if needed, as opposed to dropping the large investment required to purchase an ESX box capable of hosting 10 VMs, plus licensing - not to mention the need for failover capability and, optimally, a SAN. Moving to VM is an enterprise-level upgrade.

    Therefore, I would be curious to know what percentage of low-end servers is purchased by small businesses vs. enterprise customers. If anything, I think Dell and HP can depend on small businesses to maintain their demand for low-end servers indefinitely. Small businesses and the short-sighted are just not interested in making the upfront investment required to create a solid VM infrastructure.

  18. b shubin

    Fail-over niche

    some interesting reasoning to be had here.

    after reading through all the arguments previously mentioned, it still appears that the bigger IT shops with the bigger budgets will be able to get value from this technology much more easily, if only because of the complexity and administrative overhead required for virtualization fail-over relationships between various multi-core boxes, along with maintaining the higher-end storage and server hardware, day-to-day operations, updates, upgrades and projects.

    those bigger shops may be better served by a mainframe running several virtualized environments (IBM's LPARs come readily to mind). it is possible to host more than 70 Linux/BSD environments on a single, self-diagnosing, multiply redundant, massively parallel host that can dial your 4-hour-response support vendor automatically when it senses a component about to fail.

    now, consider smaller firms. almost 60% of the US economy is an aggregate of SMBs. each of these is an organization whose entire IT budget is likely to be under one million USD, and that's in a good year, when they can afford an IT budget and don't decide to handle technology expenses on an ad-hoc basis (you'd be alarmed how often that happens). someone mentioned a saving of £500k. this figure often exceeds an SMB's entire IT budget, and in many cases, exceeds the entire organizational operating budget for the year.

    other arguments include "thousands of corporate lemmings can't be wrong" (please examine any of the technology bubbles of the last 20 years to see how this reasoning fails), and "statistics indicate that it will be up forever" (MTBF statistics have a more tenuous relationship with reality than most people like to think).

    the one application i can think of that would be most compelling from a cost-benefit perspective, and involves minimal complexity, is to have two large, beefy, multi-socket and multi-core boxes running fail-over VMs for many smaller, physical, dedicated single-app servers. that way, one gets close-to-unvirtualized performance on one's dedicated hardware, but each dedicated box still has a VM host to fail over to, if needed. this setup may also offer benefits from a licensing perspective, depending on the software used.

  19. Daniel Ballado-Torres

    Virtualisation not magic

    Myth: you'll make big savings by cramming, say, 10 servers into one.

    Fact: your 10-in-1 server will be 10 times SLOWER than the machines by themselves, unless you put 10 times more hardware into that big mammoth.

    I think VMs are basically for legacy or dead-tree software, which must be kept running but doesn't deserve dedicated hardware by itself. Or maybe webservers that don't eat too much processor. But for heavier loads, I would not virtualize unless you're talking about a mainframe.

  20. Andy Bright

    Two things

    First is redundancy - consolidating onto fewer servers is good, and we are in the process of doing exactly this through VMware - but just because we want to reduce the number of servers we use doesn't mean our actual investment is going down - it's going up.

    Mostly because consolidating too far means greater risk if one system fails - therefore you need redundancy, therefore you still need multiple servers, and if your budget is constrained, that means buying lower-cost servers. Maybe not two-processor boxes, but I think those will become a thing of the past, just as single-processor servers are now pretty much non-existent outside small to medium-sized businesses.

    The second thing is: why on earth aren't Sun jumping on the VM bandwagon as Microsoft have done? You're telling me that Microsoft can produce virtual server software and Sun can't? You're telling me that Unix-based systems, the first non-mini/mainframe servers to use multiple processors, can't be adapted to run virtual systems? I think someone has lost some code, because I know it's been done before.

    Sun have no reason to be scared of the future if they move with the times. They can quite easily adapt their expertise in server operating systems to virtual machines. Personally I would have more faith in a Sun product than anything from Microsoft.

    If Sun produced a complete multi-core, multi-processor system with virtualisation software as part of their OS, I have no doubt they could become a huge player in this market. Certainly they have more respect from the system administrators who are still in charge of buying these systems than Microsoft could ever expect to get. VMware may be the benchmark right now, but this is still a relatively new market and there is plenty of time for Sun to produce their own product.

  21. James

    Re: VM is the future

    [why have Intel and Cisco already invested over 400 Million in VMware]

    You said it yourself: "Invested"

    It's a cash shuffle ... they throw money into a much-hyped technology, ride the hype wave, and cash out. It doesn't matter whether VM is, in fact, the future ... as long as the hype holds up long enough to turn a profit. No harm, no foul, big profits for the shareholders. VMware is the big gorilla on the scene, and backing Xen doesn't make as much sense from the financial point of view.

    I would feel more confident about VM becoming the future if there were more competitors offering solutions that worked well.

  22. Steven Hewittt

    Context

    VM will have a large part to play, but I doubt it will have a huge impact on the tier 1 vendors' bottom lines.

    SMEs will (and do) use VMs for admin reasons alone, e.g. dev and testing environments, legacy support, light-load requirements (intranet etc.).

    SMEs don't have the capital to spend £100k on a mainframe or two, or even £6k+ on a powerful server to host VMs - it's just too much money.

    At our place I'm now putting in Team Foundation Server. I used a VM as a lab, did some playing, broke it a lot. In the meantime our real server was on the back of a van being shipped to us. I've saved time by learning about TFS now, rather than breaking a real server for a fortnight. Our VM box is just an old Dell 1850 with 6 GB of RAM and a dual-core Xeon. We trial Vista on it, have the Orcas/VS2008 betas and the Silverlight dev platforms, plus a couple of permanent testing boxes, and anything else is ad hoc.

    It's about flexibility and time saving from an SME point of view - we don't have £50k to spend on VMware, a SAN and a couple of monolithic boxes, let alone the expertise. We'll be using real boxes for real loads, and VMs for test, legacy and development.

  23. Fazal Majid

    Title

    I run Solaris zones in my work environment, and the flexibility it brings is remarkable, with much lower overhead than VMware. We have a complete logical replica of our production environment running on a single machine. Each zone has its own root filesystem, IP address and password file, but they also share common resources, e.g. /usr/local.

    Virtualization is not at the expense of resiliency - I moved our entire intranet from one machine to another in 5 commands, because the zone roots are stored on iSCSI and it is trivial to detach a zone from one machine and move it to another. I even rent a zone with a guaranteed 1/8 of a machine's resources from Joyent for my personal colo needs; I have full root access on it and can install any software I want, while not having to pay the full price of a server.

    Zones are Solaris-only (or Linux apps using Linux libraries and userland on top of a Solaris kernel, in the newest OpenSolaris release with branded zone support), but I expect Longhorn Server's virtualization will have even more of an impact. That's because Windows Server users routinely dedicate a server to an application, since putting multiple apps on a single server in the Windows world is a prescription for DLL hell and instability. Lightweight virtualization should drive a massive reduction in management overhead for Windows shops, and correspondingly high savings from Windows support staff cuts.

  24. amanfromMars

    Seventh Heaven Stairway....... in Easy AI Steps.

    Step One ..... Always look on the Brighter Side.

    Excuse me, but ........ as Virtualisation is an Embedding Methodology, for both Software and Hardware to Better Beta Perform as a Future Deliverer/Driver, an OS crash is never going to be a problem for in such a case all that may be lost, if there is no back-up, or if back-up has failed, is memory/history.

    Virtualisation and its IT Technologies are not memory driven or dependent, they build upon Imaginative String Theory in their Delivery of Future Operating Systems...... After all, they do Virtualise and present to the Present, Hardware and Software Solutions which are not Physical/Tangible but MetaPhysical/Intangible in the Intellectual Property of their Creators and thus will always be available for Rebuild and Reference for Rebuilding of crashed Systems.

    And Mr Jonathan Schwartz is missing the whole point to suggest that Sun's declining server revenue is attributable to Virtualisation whenever InterNetworking to Share ITs Methodologies is the very Heart of ITs Sigma Protected and Protecting Protocols.

    If ever there was a time to reiterate that Gage adage "The Network is the Computer" ..... http://blogs.sun.com/jonathan/entry/the_network_is_the_computer .... it is now with Virtualisation, ITs Driver.

    Wake up, Jonathan, to the Sun Shine on you crazy DeaMont. And you can parse that anyway you like positively with the Godisagoddess algorithm. This is no time for you to lose enthusiasm in ITs Powerful Sexual Energy and Control Driver.

  25. Nexox Enigma

    DLL Hell?

    Fazal - the only people who talk about DLL Hell are the ones who haven't learned anything about Windows since sometime in the middle of the last decade. Multiple apps on one Windows box might be unstable (I've not had stability problems on Windows since 2000 SP3 - I've pulled 3+ month uptimes on my desktop machine running 2003, interrupted only by power outages and driver updates), but more than one application running at a time doesn't really affect the DLLs. They really haven't been a problem since MS moved off the 9x kernel.

    While Zones and VMware do sort of similar things, what I've read of the two leads me to believe that they aren't really worth comparing - sort of an apples and Martians situation. Plus, who wants to use Solaris? I've never sworn at an OS so hard in my life - and I have to use OS X on a daily basis.

  26. Andrew Inggs

    Virtualization for ease of management rather than consolidation

    A recent Slashdot post covered an article at Interop News by Jeff Gould called "On the rPath to virtual containerization" [1]. Gould argues that virtualisation's ease of deployment, migration, backup, etc. will actually *increase* the demand for server hardware over the long term. He offers Intel's recent investment in VMware as support for this view. He then goes on to discuss rPath, which allows ISVs to build full-stack software appliances, all the way down to the OS. rPath uses a trimmed-down Linux (as small as 50 MB), significantly reducing the attack surface of the final product as well as the maintenance overhead. rPath is run by Billy Marshall, with RPM author Erik Troan as CTO; both are ex-Red Hatters.

    Whether virtualization can actually increase the demand for servers or not, I agree with Ashlee that the demand for servers will not go down. I think the consolidation-by-virtualization trend is having a short-term impact on sales, but that will only last as long as there is inefficiency in the data centre to exploit. After that, unless the overall demand for more computing power is stopped -- and I can't see why it would -- server sales are sure to pick up again.

    [1] http://www.interopnews.com/news/on-the-rpath-to-virtual-containerization.html

  27. Anonymous Coward

    Amusing

    I'll caveat this by saying that I work for VMware. The opinions expressed here are my own and do not reflect the opinions of VMware.

    I find the comments and the article rather amusing. People talk about how all of their apps are multi-threaded and so should run on physical rather than virtual machines. Really? I would like to meet you. Most of the apps running in datacenters (large and small) are single-threaded. Writing a multi-threaded app is rather difficult, and something not taught until you get towards the latter part of a master's or PhD in computer science. Take a look back at the x86 apps in your datacenter (not the stuff running on Solaris on SPARC or your P or Z series, but the actual x86 stuff). How much of it is truly multi-threaded? For the stuff that is, does it actually scale linearly? Most doesn't. And for that matter, almost all of the virtualization solutions for x86 today have SMP support for the virtual machines.

    Notice I also talk about x86. That's the market this article is talking about. For those of you talking about running everything in LPARs on the mainframe or in Solaris containers - great, if your apps will actually run there. Again, in an x86-dominated datacenter space you're going to have a tough time getting your apps to run on the mainframe, or actually be supported there.

    Then there's the talk about how everything should just run in chroot or some other esoteric Linux solution. If you're a 100% Linux shop, that just may work for you. What about all of your Windows stuff, or your NetWare stuff? You may laugh, but that's still running in your datacenter. And if chroot and containers and other Linux solutions that have been around for a long time were really that great and could be operationalized, then why haven't you been running your datacenter like that already? Hmmm....

    My last comment is for the performance junkies. You know who you are. The people who say virtualization adds too much overhead, or that stacking 10 apps on a single server makes things run 10 times as slow. I've done countless performance collections (well over 2,000) on datacenters around the world and the results are almost identical: over 90% of the x86 apps in the datacenter run under 10% utilization. So why do you care if the virtualization solution runs at 90 or even 80% of native, when your app only needs 10% of native? Perhaps we need to do better at teaching math in the schools. And this only gets worse as you start adding more cores and processing power to servers. Your apps use even less.
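    The arithmetic, using the pessimistic 80%-of-native figure:

      app_demand = 0.10       # the app uses ~10% of a native box
      virt_efficiency = 0.80  # worst case: the VM runs at 80% of native speed

      headroom = 1.0 * virt_efficiency - app_demand
      print("Even at 80%% of native, %.0f%% of the box sits idle"
            % (headroom * 100))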

    The biggest thing holding more efficient datacenters back these days is ignorant posts like the ones found here. Virtualization has been around for over 30 years, thanks to IBM. It's not something new to be afraid of. We're simply taking tried-and-true solutions to the x86 space. Start educating yourselves, and think about what you write before you post.

