The religion of virtualization may be all over the IT trade press, but apparently the data centers of the world haven't heard the good news yet and have been buying up PCs, servers, and storage gear like crazy to support their application and data loads. And if data from British IT consultancy Compass is any guide, IT budget …
... & I thought they just did the (pricey) in-house catering for BT!
Mine's the one with the sandwich in the pocket; it's all I could afford!
What A Joke!
Virtualization is overrated. There's a reason it's not being heavily used: it's not as reliable as tried-and-true hardware. VMs are great for backups and as tertiary failovers (after a hardware failover has failed), but they should not be used to replace hardware.
I agree with @Joey
Virtualisation is way overrated, and it often costs significantly more than discrete hardware while decreasing reliability.
Take my environment for example.
I work for a (non-UK) government department.
Our Sun servers are outsourced.
Our maintenance cost for a T5220 is something like 3k/month for the server, plus 2k/month for each Solaris zone (virtual server). So a piece of kit with 3 VMs on it costs 9k/month: 3k/month for the 'global zone' (the administration zone that only our outsourcers have access to) plus 2k/month for each of the VMs.
Before we used VMs, it cost a flat 3k/month for each server, so 3 physical servers cost 9k/month.
Now, of course, if we have a physical hardware failure we lose 3 zones (what was once 3 physical servers), for the same cost.
You might say there is more flexibility, in that unused resources can be reassigned from one zone to another. In theory there is, but it costs $$ to get the resources moved. First, someone has to prove we need resources moved from zone A to zone B (say 20 man-hours). Then they've got to convince their management to send a request to the outsourcer to move the resources. Then it's got to go through change control (another 5-10 man-hours of various people's time). Then the outsourcer, 4 weeks later, charges 4k+ to type a few commands on the command line to move the resources...
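Just to sanity-check the arithmetic, here's a throwaway Python sketch using only the rates quoted above (3k/month for the global zone, 2k/month per zone, 3k/month per physical box); the figures are the ones from this comment, not official pricing:

```python
# Throwaway sketch of the monthly costs quoted above (figures in thousands).
# Rates come from the comment, not from any official price list.

def virtualised_cost(zones, global_zone=3, per_zone=2):
    """Monthly cost of one T5220 running `zones` Solaris zones."""
    return global_zone + per_zone * zones

def physical_cost(servers, per_server=3):
    """Monthly cost of discrete physical servers."""
    return per_server * servers

print(virtualised_cost(3))  # 9 -- 3k global zone + 3 x 2k zones
print(physical_cost(3))     # 9 -- 3 x 3k servers
# Same 9k/month either way, but one hardware failure now takes out all 3 zones.
```

And note that on these rates a single-zone box comes out at 5k/month against 3k/month for a plain server, so below three zones the VM route is strictly worse.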
From an administrator's point of view, managing 10 VMs is exactly the same as managing 10 physical servers. It still takes the same amount of time to ssh to a VM, check logs, restart an app server, truss processes, configure something or deploy something as it does on a physical server. I can definitely tell you that the labour costs of sysadmins managing a server, whether physical or virtual, are far greater than the labour costs of asset management or the data centre costs. And with the virtualisation hype, management (on the advice of the outsourcers, of course) are going to create shedloads of VMs, costing more $$$ than a few physical servers would.
Using VMs also increases complexity and other maintenance costs. Does this new patch work in a VM? Do we have to keep VMs patched? How much does it cost to verify the software/patch that works fine on a bare-metal server also works within a VM on the same hardware?
What about tracking down problems? Which physical interface is traffic really coming in on or going out of, when inside the VM we only have visibility of the virtual interface? And how much does it complicate the routing tables on the server to ensure that traffic leaving VM A has a source IP address of A while traffic leaving VM B has a source of B, rather than each one randomly picking a physical interface to transmit from?
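(The usual answer to the source-address problem is OS-level policy routing, e.g. a routing table per zone; at the application layer the equivalent trick is binding the socket to a source address before connecting. A minimal, loopback-only Python illustration of that app-side analogue, not anyone's production setup:)

```python
import socket

# Loopback-only illustration: pin the source address a connection uses,
# so outgoing packets carry a specific IP. (Per-VM routing on the host
# is normally handled with policy routing; this just shows the idea.)
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))   # ephemeral port on loopback
server.listen(1)
port = server.getsockname()[1]

# Connect while explicitly pinning the client's source address.
client = socket.create_connection(("127.0.0.1", port),
                                  source_address=("127.0.0.1", 0))
src_ip = client.getsockname()[0]
print(src_ip)  # 127.0.0.1 -- the pinned source address

client.close()
server.close()
```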
Virtualisation just increases costs and decreases reliability.
If it were not for BOFH and the PFY dumping some of the older unwanted but perfectly serviceable hardware into a 'recycling' skip on occasion we would be short of at least one server, two monitors, a scanner /printer /fax combo... you get the picture
I Wouldn't Hire These Guys
These are second-hand comments, based on the summary in this article rather than on the original but ...
1. Over the past four years, the companies that Compass studied saw a 5 per cent drop in the cost of PCs, but the number of machines they acquired rose by 18 per cent.
PCs have dropped more than 5% in price in the last 4 years. Where are these guys buying PCs?
2. Ditto for servers, which had a unit cost reduction on average of 66 per cent at these customers over four years, but volumes also more than tripled.
Any company I have analysed significantly underuses its Windows servers. Bear in mind that the penetration of virtualisation is still much less than 10% of servers. So companies are not looking for consolidation.
An 18% increase in PC numbers is not a capacity issue. It is a people issue.
3. Pacileo cited one Compass customer, a large commercial bank, that had a core banking system with 127 interfaces to a slew of applications. Compass recommended that the company stop adding its own interfaces to the software every time it added an ancillary application and instead buy a new core banking system, which required the development staff to support only 33 interfaces.
Way to go. The riskiest, costliest, most delayed and most failure-prone projects are those involving the implementation of new core banking systems. With advice like that, these guys must be looking to offer a range of expensive project governance and other services that will fall out of such ill-advised projects.
How about other options: SOA, message brokering or similar inexpensive approaches that will deliver real benefits more quickly?
4. "Organizations that effectively rationalize their application portfolios reduce overall spend by 20 to 40 per cent, enhance quality and organizational agility, and reallocate savings to implement more innovative and competitive solutions," Pacileo says.
Application consolidation relates to reducing the number of servers or server images on which applications run.
Using a large number or variety of applications can add unnecessary overhead to both application use and administration. Consolidating these applications can reduce this overhead by making applications more easily accessible to users and administrators, and also by decreasing the needed amount of resources, such as server time.
It differs from server virtualisation and consolidation, which retain the same number of server images but reduce the number of physical platforms they run on by encapsulating the previously physical servers as virtual servers managed by a virtualisation hypervisor.
Server virtualisation on its own may not be the best solution. It can mask underlying problems. The result is the same number of server images and applications, just not physical servers.
A comprehensive application and infrastructure consolidation view allows organisations to see the bigger picture and identify a wider set of cost-saving opportunities. It surfaces all the issues, can provide a business case for investment, and provides a checkpoint before selecting an implementation vendor.
Many of the approaches to application consolidation involve application and underlying business process changes:
• Replace old applications with newer ones
• Consolidate many small applications into a smaller number of more functional applications
• Replace existing individual applications with components of larger systems (such as SAP)
• Modify business processes to eliminate the use of applications
• Redevelop custom applications to use a Web application server infrastructure and consolidate onto a small number of shared Web application servers (WebSphere, WebLogic, etc.)
There is a cost associated with these changes, and it is probably substantial. The timescale to implement application changes is also long.
You can look at simpler options: SQL Server consolidation (using PolyServe), file and print server consolidation using NAS (NetApp, IBM N series, etc.), Citrix (with AppSense), and so on. They all deliver benefits more quickly and incrementally, rather than as risky big-bang projects.
re: What a Joke!
Well Joey, how does your statement stack up against the 'king' of VMs, the IBM mainframe? These venerable systems have been running VMs by the score for years, and in production environments.
If you are referring to the sorts of VM environments available today on x86-type hardware then possibly you are correct. But (and there is one, isn't there?) what about systems that need to run standalone and have little CPU-intensive traffic? These can be run in VMs on existing systems.
Many companies have a few really old legacy apps that they just can't get rid of. Making VMs of these makes real sense, especially if it cuts down on the leccy bills a bit.
VMs have their place, and the size of that place is growing.
If they haven't got enough power to run their applications on the hardware they have, how is virtualising parts of their stack going to make things any better, other than imposing more overhead and administration issues? Running 10 applications plus 10 OSes (one for each application) is always going to be slower than 10 applications running on one OS. We need more container solutions (VServer, Sun's Zones) and fewer attempts to make x86 hardware do things it really shouldn't.
Jokes on you pal...
@Joey: "Virtualization is overrated..." Erm, maybe I've missed the relevant PowerPoint, but I don't remember any VM vendor claiming that their VM software makes the underlying hardware more reliable; that's a snake-oil claim that no-one would believe. What VMs _DO_ let you do is make more and better use of the expensive hardware you're spending your IT buck on. Although I did just come off a project where the VM solution was used to provide extra fault tolerance: the logical resources presented to the client VMs were multipathed at a lower level, and the fact these logical resources were shared made it cost-effective to provide failover resource.
Consolidation is where VMs are "at": it means buying less hardware or, better still, spending the same amount on "better" hardware, and that better kit usually comes with a shedload more Reliability-Availability-Serviceability (RAS) features than the lower-end stuff.
There are some good docs available on this from IBM (Redbooks and the System p sites) - definitely worth a read if you want to see how good a hardware assisted VM environment can be.
Small disclaimer - I don't actually work for IBM (actually one of their competitors) before someone out there accuses me of 'tech pimping'. ;)
Meanwhile, on the main part of the report: I'd think businesses are buying a lot of kit because the vendors are currently trying to punt it out at 'fire sale' prices; it'd be foolish not to take advantage...
VM promises much
But at least in the environment I look after, it delivers a lot less.
We had a change of management a couple of years ago; the new guy came in and added a SAN and VMware ESX to our environment.
Every server was moved to the new platform, consisting of three quad-core Dell 6850s.
Everything slowed down dramatically compared to the previous infrastructure, especially SQL. And before anyone says it: I know, this scenario was confirmed by the reseller as 'ideal' and we fell for it.
I am now in the process of moving the core infrastructure back to physical servers (at least our 2 SQL servers and Exchange), as the performance hit is too high.
The only thing our setup does simplify is backups.
Overall a nice idea, but on i386 platforms it doesn't seem to be workable.
Isn't it a bit daft to assume that virtualisation will mean that you will need less hardware? Does anyone really think that?
I mean, x server-side applications will generate total load y whether they are on x different servers or one big one. So you'll still need lots of computer kit, either way...
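True, the total load is conserved, but the consolidation case rests on utilisation: if each app only loads its box to 10-25% (as the earlier comment about underused Windows servers suggests), the same total load y packs onto far fewer hosts. A hypothetical sketch with made-up load figures, using first-fit-decreasing bin packing:

```python
def hosts_needed(loads, capacity=1.0):
    """First-fit decreasing: pack per-app CPU loads (as fractions of one
    host's capacity) onto as few hosts as the heuristic manages."""
    free = []  # remaining capacity per open host
    for load in sorted(loads, reverse=True):
        for i, slack in enumerate(free):
            if load <= slack:
                free[i] -= load  # fits on an existing host
                break
        else:
            free.append(capacity - load)  # open a new host
    return len(free)

apps = [0.15, 0.10, 0.20, 0.05, 0.25, 0.10]  # six lightly loaded apps
print(hosts_needed(apps))             # 1 -- one host carries all six
print(hosts_needed([0.9, 0.9, 0.9]))  # 3 -- busy apps consolidate to nothing
```

Which cuts both ways: lightly loaded boxes are where the hardware savings come from, and once your servers are actually busy, virtualisation buys you nothing but overhead.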
Seems to me your biggest gripe is with the outsourcing, and that is no surprise at all.
Paris? She only "outsources" when she needs to