Is server virtualization delivering for you yet?

In one of our recent Tech Panel surveys, conducted towards the end of 2008, we asked respondents to call out what they felt were the most positive trends in the IT industry during the previous 12 months. In answer to this totally unprompted question, the word ‘Virtualization’ came up more than four times as often as any other …

COMMENTS

This topic is closed for new posts.

Simply put

3:1 consolidation.

I can say that with confidence as we have three racks sitting in our server room, and since we virtualised half our servers, if we wanted to we could put everything in one rack with room to spare.

And we can easily virtualise more if we like. And that doesn't even count the hardware savings made in downgrading the specifications of desktops, if we move our users to a virtual environment (which will almost certainly happen within the next five years).

The only blockage at the moment is cost; once it is cheaper to virtualise all our kit than to maintain the existing setup, we'll probably end up with one massive NAS and virtual suite, and the only external remaining part being backups :)

Let me know when I can virtualise the users themselves too please...


Small company benefits?

We are a small company, around 50 employees. We've used VMWare Server (and before that VMWare GSX Server) to consolidate hardware, simplify server maintenance, allow the introduction of alternative operating systems and dramatically boost the effectiveness, speed and reliability of our disaster recovery solution.

For hardware consolidation we have 15 virtual servers (Windows/Linux) running on only 3 Dell PowerEdge servers. That's MS Exchange, domain controllers, file servers, web servers, print servers, etc.

Server maintenance is helped a great deal. The sysadmin can try ill-advised "upgrade" ideas on an offline copy of the production server before trying them on the real thing. That's helped us avoid quite a few problems in the past.

We had no Linux servers before virtualisation. The skills to manage Linux running directly on the hardware for mission critical services just weren't there. Nor was the confidence (in some quarters) that Linux was suitable. Virtualisation has allowed us to introduce Linux gently. It's allowed a server to be built which the sysadmin could treat as a "black box", starting, stopping and backing up via the familiar Windows UI. These days (several years later) we have in house Linux skills and Linux running directly on hardware for our most important mission critical servers.

The benefit of virtualisation to our disaster recovery solution can't be overstated. We back up virtual machine folders to disk and tape. Simple, fast, no expensive "backup agents" or other complexity required, and the result can be restored onto any hardware. Restoring our entire Exchange server from backup is just a case of dragging a folder in Windows Explorer, compared to the previous hell of being faced with a "system level" backup tape, some hardware and a weekend in the office.
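That "just copy the VM folder" approach can be sketched as a small script. This is a minimal sketch, not the poster's actual setup: the function name and paths are hypothetical, and it assumes the VM is powered off or suspended first so the files on disk are consistent.

```python
import shutil
from datetime import datetime
from pathlib import Path


def backup_vm_folder(vm_dir: str, backup_root: str) -> Path:
    """Copy a whole VM folder (virtual disks, config, NVRAM) to a
    timestamped directory under backup_root.

    The VM should be powered off or suspended first so the copied
    files are in a consistent state.
    """
    src = Path(vm_dir)
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    dest = Path(backup_root) / f"{src.name}-{stamp}"
    shutil.copytree(src, dest)  # preserves the folder layout the hypervisor expects
    return dest
```

Restoring is the same operation in reverse: copy the folder back and register the VM, with no hardware-specific drivers to worry about.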

I can't imagine being without VMWare.


Virtualisation = consolidation

I work for local government, and we have consolidated close to 25:1 on x86 Windows servers over a 2 year period.

We also have been re-deploying virtualised hardware servers (if decent spec/age) instead of purchasing new hardware servers.

Our supplier isn't worried though, as they make plenty of profit selling us SAN certified storage instead of app servers.


aye

Yeah, it's great. We've squished over 50 intermittent-use/low-load internal servers into 6U of space. Love VMware ESX to bits.


I'm missing something

If you run a decently loaded db server (for example) you'll typically want it on a real machine of its own. Putting it into a virtual machine isn't going to reduce the load, so sharing it as a VM with other VMs on a real machine isn't going to help [*].

Ditto Exchange which I understand is a pig (never used it). etc.

If you do though run lightly loaded apps then you don't need a VM to run several of them on the same physical server; you just install them directly.

For managing resources, then, VMs seem to gain you nothing.

So what's the win? Do departments just get sloppy with resource management or what?

I know there's other stuff like running legacy apps (gray_ said useful stuff, thanks), but still. I can't understand how e.g. Dez_Borders could possibly get 25:1 without the original setup being a total wasteful mess from the start. Even 3:1 per Ian Ferguson can't have been a good setup.

Perhaps you're virtualising desktops but MS terminal services (AKA screenscraping with licenses, it is MS we're talking about...) worked fine for us for some testing.

[*] More than likely it'll make it worse as the app/OS suddenly can't monitor real performance so starts fighting with other apps, in fact.


Right time right place

Between 20:1 and 50:1 using a VMCO Appliance and a free hypervisor (Citrix)

Main benefits have been obvious power, HW maintenance and space savings, plus increased disaster recovery/continuity, as workloads may float between physical appliances. Capacity planning is now a cinch and I can promise the business with certainty that a given spend will result in a corresponding increase in processing capacity.

HZ


SME gains

We're a SME (140 staff) and our server virtualisation project has been very successful.

4 main servers (DB, 2 email and web app) and 24 small servers have all been consolidated onto 3 VMware ESX servers plus a management server. 28 on 3, or 4 if you count the management server.

We virtualised a mix of operating systems including Novell Netware, SUSE Linux, Windows SBE 2003, Windows XP and FreeBSD.

Our savings are principally on space, power and cooling. Our gains are on DR, flexibility, management and the ability to add new virtual servers at no or little extra hardware cost. A bit more memory would help but what's new?

The big driver for virtualisation was DR. With the previous mix of physical servers meaningful DR was impractical. Now it's easy.

We have lots of operational flexibility. We can test software upgrades to production servers by taking a copy and testing it in a separate virtual environment, and by taking a snapshot before doing the upgrade in the production environment. Belt and braces.

While we have 3 ESX hosts, 2 are enough to run our production systems, so should a host fail or need maintenance we can cope with minimal disruption.
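The "three hosts, two are enough" arrangement is classic N+1 sizing: the cluster must still carry the full load after losing its largest host. A rough check of that headroom can be sketched as follows; the capacity units and numbers in the example are illustrative, not the poster's actual figures.

```python
def survives_one_host_failure(host_capacities: list[float],
                              total_vm_load: float) -> bool:
    """Worst-case N+1 check: can the cluster still run all VM load
    after losing its single largest host?"""
    remaining = sum(host_capacities) - max(host_capacities)
    return total_vm_load <= remaining


# Three equal hosts of 100 units each, VMs needing 180 units in total:
# losing any one host leaves 200 units, so the load still fits.
print(survives_one_host_failure([100, 100, 100], 180))
```

The same check tells you when you can take a host down for maintenance without disruption, since a planned outage looks just like a failure to the capacity math.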

This all sits on an EMC SAN which is working very well too.

We've been able to figure out much of this for ourselves with the help of the various online forums.

I'm happy.


@BlueGreen

25 apps on the same OS install, with overlapping ports, libraries, web servers, drivers... one vulnerability in one app and a hacker has all of your infrastructure, nice. Need to do a hardware update? With no VMotion, your entire business is down while you re-install 25 apps. Have a poor app with a memory leak? The entire business crashes for a while, instead of one app being down. There are probably a hundred other reasons why VMware is better than 25 apps per physical box.


Virtualisation rocks

The reason we first considered virtualisation was the ease of restoring a backup: since the hardware is always the same, you can restore a backup straight to a new VM without worrying about Windows suddenly finding New Hardware.

However, after we found out what else you can do with virtualisation, we've gone a long way beyond that. We now use VMware with VMotion, with 2 beefy PowerEdge servers and a decent SAN. VMs can be moved seamlessly between the 2 physical servers in order to do things like hardware maintenance or VMware upgrades, and also to balance load.

We have a third VMWare server off-site (but cunningly on the end of a private fibre link) to which we make vreplicator backups: this means that if our server room burns down we just click on the remote server's management interface and hey presto, all our servers pop up again with only a couple of hours' data lost.

Having the nightmare of hardware failure taken away is like having an enormous weight lifted from you.

Being able to make clones of a VM and test upgrades on it is the second best benefit.

Not worrying about the hardware costs of buying a new server when wanting to run a new service is also a big boon: it's like shifting from pay-per-minute dial-up to always-on broadband.

We're running about 15 VMs per server: a mix of Windows and FreeBSD mostly, some high power (e.g. mail), some low power, but there's still plenty of room for more.

Virtualisation rocks.

Oliver.


Where are these savings coming from?

Working for an ecommerce host running primarily a mix of shared web servers and database servers, I'm amazed at the ratios quoted so far. There are no savings in virtualising my boxes unless I iSCSI-booted them, which is storage-only virtualisation. Were people really running boxes at under 5% capacity before this?

Anyhow, I've virtualised my (live) DNS servers, my test servers and non-production development servers which has saved a handful of boxes and allowed for safer development.

My actual production boxes are too busy to splice them together.


Never called a vendor for tech support, eh?

>If you do though run lightly loaded apps then you don't need a VM to run several of them on the same physical server; you just install them directly.

Which is all well and good until two vendors' packages conflict.

Or you have to tell the people using the other 10 applications you installed "Sorry, rebooting the server, nothing to do with your stuff, it's the other guy's, but it's all on the same box..."

Virtualization minimizes the hardware while still keeping each vendor's tech support happy and minimizing conflicts and single-points-of-failure within a (virtual) server environment.

Not everything is appropriate for virtualization, but a heck of a lot is and it sure makes it much easier to manage machines and also to test solutions before moving them into production!


good and bad

vmware really has proven to be a magical thing for us. the ability to copy and back up servers, create test environments and do disaster recovery is, for us, revolutionary. no more mysterious windows hardware crashes. everything stays up. and, if you pay the extra vmware money, it stays up even when you really do have a hardware failure.

the downside for us? we may save tons of money on the server hardware, but we spend the savings on the software and supporting hardware. yes, you can go the cheap ESXi route and manage everything by hand, but after you play around with it, or when you need to start load balancing stuff, you are gonna want to throw down for the licensed version. and throw down you will. for us, it definitely isn't a cheap investment.

and then after the software investment, you will also need to invest in networked disk (NAS/SAN/DAS) and switches. there are lots of people that write about this stuff, and they seem to have infinite cash resources to afford the latest/best hardware. not the case with us. it can be done on a smaller budget, but there are still going to be up-front infrastructure costs to start down the vmware path.

and while you can add a new virtual machine no problem, once you reach your performance limit, you will need to add new hardware. but adding hardware in our setup comes in package chunks. every four physical servers we buy needs to come with a networked disk. and a switch (or two). and more vmware licenses. that hurts. but then you realise that those hardware hosts can handle all those easily manipulated virtual machines, and that's when the grinning begins again.

yes, we looked at the other vm software, but vmware was the one that stood out. no two ways about it, it's great, but expensive.


Qemu to the rescue

We needed a Windows based server for one single application. It didn't seem very wise to spend $1000+ on another server that would be idle 99% of the time, especially when the existing Linux server is idle and has tonnes of RAM.

So, qemu went on, and we have a Windows install sitting on our Linux server without anyone noticing. Problem solved. Not a big VM story. Nor is it done "properly", but it's a real use, and it got us out of trouble (well, wasn't really getting out of trouble, as much as adding convenience).
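For reference, a headless Windows guest tucked onto an idle Linux box like this can be launched with a qemu command built along the following lines. This is a sketch only: the disk image name, memory size and VNC display are assumptions, and the poster's actual qemu invocation isn't given.

```python
import subprocess


def qemu_command(disk_image: str, mem_mb: int = 1024,
                 vnc_display: int = 1) -> list[str]:
    """Build a qemu command line for an unobtrusive guest:
    user-mode NAT networking, console over VNC, detached from
    the terminal so nobody on the host notices it."""
    return [
        "qemu-system-x86_64",
        "-m", str(mem_mb),               # guest RAM in MB
        "-hda", disk_image,              # guest disk image
        "-net", "nic", "-net", "user",   # simple NAT networking, no host config
        "-vnc", f":{vnc_display}",       # console reachable over VNC
        "-daemonize",                    # detach and keep running in background
    ]


# To actually start the guest (requires qemu installed on the host):
# subprocess.Popen(qemu_command("windows.img", mem_mb=2048))
```

User-mode networking and `-daemonize` are what make this "invisible": no bridges or TAP devices to set up on the Linux server, and nothing tied to a login session.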


Best thing we ever did

We are a SME with 50 employees at head office with various satellite sites using our intranet. Our IT budget isn’t exactly massive so we really do have to watch every £.

Prior to rolling out VMware we had 1 SBS box running everything, and it was nasty. Exchange would grab all the RAM unless SQL got there first. Applications would have clashing port requirements, etc.

After testing ESXi and Hyper-V we plumped for a basic VMware Foundation setup on 2 servers + 1 management server, no high availability etc. We chose this because we could run any OS inside the virtual environment, and we knew we could use the free products to give us test and development environments at little/no real cost.

Good points

--------------

Consolidation

Currently running 4-5 VMs per physical server and seeing very low usage (12-15% CPU), as we have a small user base.

High availability server hardware for EVERY system

Previously we could only afford servers with redundant power supplies or RAID 10 for some systems. Most lived on RAID 5 or 1. With virtualisation we have servers with dual power supplies, multiple network cards and RAID 10 with battery backup.

Flexibility

It is now much easier to add a server, the only cost is the licensing. No hardware, switches, power supply issues.

We have also been able to deploy a few VMware appliances to provide additional services i.e. proxy servers.

Dev and testing environment

We didn’t have one before. Now we do. That in itself makes virtualisation a brilliant IT tool.

Bad points

More disk

We didn’t spend enough on the disk storage, and now we have run out. Upgrading this is going to cost lots, possibly more than the initial roll-out.

More memory please

We maxed out our memory at the time. Sadly we have used pretty much all of it, and again it will cost lots to upgrade.

Snapshots not all they are cracked up to be

If you take a snapshot of a running machine it will not be in a consistent state. On Windows the registry is only flushed to disk when the machine shuts down, so any registry changes would be lost in a snapshot.

Taking a snapshot of SQL Server 2008 causes the system clock to wander a few milliseconds out of sync, meaning Windows authentication won't work. This also applies to other Windows servers.


Our experience in virtualization

I work in Intel IT. We have been studying the benefits of virtualization and started deploying it at larger scale last year. We were planning for a 10:1 to 16:1 consolidation ratio; I believe we are now targeting 14:1 to 20:1. My colleague has documented our experience and lessons learned in a white paper. It is an interesting read. You can find it here: http://communities.intel.com/docs/DOC-1513


For them but not me

I used to work at a huge online betting company based in England, and we didn't use VMware that much; a few things here and there, but overall, virtualisation wouldn't have worked. The web server farms (3 sets of 25 blade web servers in each set) were really going some on a weekend, and these were beasts; the database servers were top-of-the-range 64-CPU HP half-a-rack-high bad boys, really fucking going some.

VM wouldn't have worked; by the time we had specced out a server to host VMs, including future predictions, it was out of date. We were upgrading, installing and upscaling servers all the time.

This being said, however, I now work at a much smaller company and I am looking to install some virtualisation there, as it would massively improve their services, HA, DR and my stress levels. We have a lot of servers at low power that need to be separate servers. Once this has all been rolled out: high initial cost, but savings in the long run and a massively better infrastructure.

Horses for courses, I think the saying goes.


Intel IT's experience w/Virtualization

As this is my first post on this forum, I'd like to introduce myself. My name is Bill Sunderland and I have been working at Intel for 13 years, primarily on server hardware engineering; for the last four years I have focused on program managing the virtualization engineering releases for Intel IT. Last year, I published a WP demonstrating the methodology we used:

http://communities.intel.com/community/openportit/it/datacenterblog/blog/2008/04/24/intel-it-deploys-virtualization-how-we-did-it;jsessionid=729FBF170D1B49D7057098BA00B3DA21.node4COMS

While this WP is a year old now, it's still very useful for learning how a large corporation implemented virtualization. The challenges remain 'virtually' the same. As of 2H '09, we have over 1,400 VMs deployed across eight major sites worldwide. We are currently at 9:1 consolidation ratios on average, with calculated available capacity to achieve 15:1.

A superb new bit of information is our WP on Nehalem's performance. Here's the WP introduction and link: "Learn about Intel IT’s proof-of-concept testing and total cost of ownership (TCO) analysis to assess the virtualization capabilities of Intel® Xeon® processor 5500 series. Our results show that, compared with the previous server generation, two-socket servers based on Intel Xeon processor 5500 series can support approximately 2x as many VMs for the same TCO."

http://communities.intel.com/docs/DOC-3425

As you will see from the WP, we are anxious to begin our deployments on Nehalem.

There is one more good source of information on this topic, in presentation format, that is currently undergoing legal review/proofing before being shared externally. It is due for release in two weeks. However, I did want to point out a key learning from the presentation regarding the comprehensive ROI study we conducted. In short, we analyzed the sources of positive and negative ROI as percentages. Our study showed the following results for deploying via virtualization compared to normal physical servers:

Positive ROI:

40% Server Capital Reductions

11% Ethernet Switch Port Reductions

10% Power Reductions

7% Rack Reductions

3% Ethernet Cable Reductions

3% Ethernet Cable Run Reductions

3% Deployment Labor Reductions

Negative ROI:

10% Virtualization License Cost

9% SAN Cost

2% FC Switch Port Cost

This analysis really helps us understand where the savings and costs are coming from. As a result we looked at the negative ROI factors and tried to determine whether we could reduce those costs even further. We found one opportunity in the SAN cost area and have already implemented the change. We saw that we were using expensive FC HDDs for our entire virtualized environment, yet 70% of that environment was pre-production while only 30% was production. As a result we determined that we could use less expensive SATA HDDs for the pre-production environment.
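Tallying the percentages listed above gives a feel for the net position. The quick sum below uses exactly the figures quoted; only the variable names are mine.

```python
# Percentages from the ROI study quoted above.
positive = {
    "server capital": 40,
    "ethernet switch ports": 11,
    "power": 10,
    "racks": 7,
    "ethernet cables": 3,
    "ethernet cable runs": 3,
    "deployment labor": 3,
}
negative = {
    "virtualization licenses": 10,
    "SAN": 9,
    "FC switch ports": 2,
}

gross_savings = sum(positive.values())  # total positive-ROI contribution
added_costs = sum(negative.values())    # total negative-ROI contribution
net = gross_savings - added_costs
print(f"gross savings {gross_savings}%, added costs {added_costs}%, net {net}%")
```

The point the SAN-cost change illustrates is visible in the tally: SAN is nearly half of the added costs, so it is the obvious place to attack first.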

I hope this information is helpful!

Thanks,

Bill Sunderland

Intel IT Server Virtualization Program Mgr.

