Has a Blog Bot kicked in?.....
Who re-posted this article? It seems to be from 2007.
Picture the scene. You’ve run your legacy infrastructure into the ground. You bought it six or seven years ago with a view to depreciating the hardware over four years, or perhaps even three, so it’s done its time and then some. Now it’s starting to get flakier than you can live with, and as your channel partner’s spares supply …
Glad it's not just me.
"Is it the right time to virtualize?" - er... no... that time was about 10 years ago, mate.
To me that's at least three hardware cycles, servers and clients.
Sure, network virtualisation is still a bit "what's that?" to most places, so I can understand not touching it. But servers - yes. Storage? Depends. Most people don't do a lot of storage, but the article is aimed at datacentre (or so it says).
But "should you be virtualising your old crusty servers?" Hell yes. Unless you are in HPC or similar, of course you should. You should have been doing it for YEARS already.
The sort of places that haven't virtualised are (as Dell's article suggests) not actually large companies running nice neat 3 year replacement cycles with large budgets and large support teams. It's a completely different world, effectively.
I work in such an environment. Due to there being very little difference between revenue and operating costs, we don't get much to spend on IT. That, and the fact that money spent on equipment essentially comes straight out of the boss's pockets. Point of fact, if IT here were an employee, it'd be taking the employer to court for not being paid the minimum wage.
In this sort of environment equipment tends to run until it breaks. My oldest hardware is the telephone system, which turned 20 a few months ago. I do have a couple of unvirtualised servers left, which have kept running for no particularly good reason beyond the business not wanting to spend the money on the extra hard drive space and memory on the VM host needed to virtualise them.
Well, to be fair, one of them does the voicemail (a 2k box), which uses a pair of x4 modem cards: one presenting a POTS interface to the public to pretend to be an answerphone, and another for staff home access. Modem cards aren't that easy to get for modern hardware for some reason. Getting that to work after being virtualised probably isn't worth the hassle, especially given that it also talks to the telephone system over a serial connection at 9.6kbps, logging SMDR data. The others don't really have any excuse for still existing, though.
It can be a somewhat frustrating environment to work in, but it pays a lot better than being a lowly 3rd line tech on an ITIL helldesk, and the day is always interesting and varied. And if it isn't, I can endlessly amuse myself with various projects or reading El Reg. Like now. Ho de hum.
Virtualisation certainly gives flexibility but it can go horribly wrong in a high usage environment.
The example in the article says you could let the DB server use the extra memory the Mail server doesn't currently need. That of course is true but what happens if the e-mail server wants it back? I've seen a lot of problems with over commitment of resources.
If you've got a high-usage DB server, it needs predictable access to resources, and given the price of hardware it's often best to just give it a dedicated server. After all, if the DB server needs more memory, buying an extra XX GB of memory is not expensive.
I think there's been a bit of a cycle: first everything was virtualised, and then people began to see which things actually benefited from it and which didn't. So, in most mature organisations you end up with a mix.
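The overcommitment risk described above comes down to simple arithmetic: the host "sells" guests more RAM than it physically has, which is fine only while the guests' combined *active* demand stays below physical capacity. A rough sketch, with entirely made-up VM sizes (the host size, VM names, and figures here are illustrative assumptions, not from the article):

```python
HOST_RAM_GB = 128  # hypothetical host

# (allocated_gb, typical_active_gb) per VM -- illustrative numbers only
vms = {
    "db":   (64, 60),   # DB server: caches aggressively, uses all it's given
    "mail": (64, 24),   # mail server: quiet most of the day
    "web":  (32, 8),
}

allocated = sum(a for a, _ in vms.values())
active = sum(u for _, u in vms.values())

overcommit_ratio = allocated / HOST_RAM_GB
print(f"overcommit ratio: {overcommit_ratio:.2f}x")   # 1.25x here

# The danger case: every guest asks for its full allocation at once
# (e.g. the mail server "wants its memory back") and the host must swap.
if allocated > HOST_RAM_GB:
    shortfall = allocated - HOST_RAM_GB
    print(f"worst-case shortfall: {shortfall} GB from swap/ballooning")
```

Everything is fine while active demand (92 GB here) fits in physical RAM; the pain starts the moment allocations are all claimed at once.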
"I work in such an enviroment...."
And yet, by your own admission, even you only have a few bits of unvirtualised kit left. You are self-professedly at the bottom of the pack, in a place where IT spend is minuscule and every chunk of capex must be hard-fought for... and even so this article is still basically irrelevant for you in 2017, since it's advocating something that you've managed to push out despite the enormous budgetary constraints that you're forced to work under. For everyone else, it's borderline archaeology.
Virtualisation is no silver bullet. I probably virtualise about 95%, but there's always a fringe case.
A good example is wanting HA for a service rather than just for a VM, in which case you'll need application-level HA. Application-level HA takes the wind out of virtualising somewhat, as you'll likely gain no benefit from the VM being HA (you'll have multiple VMs running the service), and it's also likely that each VM will have its own copy of the data - meaning you're taking up a sh!t tonne of space on your very expensive SAN.
Using Exchange as an example:
Let's say you use Exchange for the application-level HA via a DAG. This is great, but it means we're getting very little benefit in terms of HA by running as a VM. I've got two Exchange servers running on separate VM hosts, but regardless of whether a VM crashes or one of the hosts dies, Exchange keeps running... exactly the same as if I had two physical Exchange servers rather than virtual ones.
Now, running our imaginary Exchange servers as VMs, we're making things expensive, as each Exchange VM has its own copy of the mailbox databases - so that's twice the data I'm storing on the expensive shared-storage SAN.
Things like Exchange can eat up your storage, to the point where you may need 10TB for each Exchange server, yet your SAN is only 30TB in total. That's two-thirds of your SAN used up straight away, and the cost per TB isn't usually that great with the typical SAN.
All of a sudden, you take a step back and realise that it would be considerably CHEAPER, with the same level of resilience, to just buy 2 x DL380s or 730xds and use cheap local storage instead.
You then have a load of extra capacity on your virtual hosts, plus your SAN has oodles of space left on it too, saving you a fortune as you don't need to ask the FD for another £20k for a new shelf on the SAN that you bought only 6 months ago.
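The back-of-the-envelope maths above is easy to sketch. The per-TB prices below are hypothetical placeholders (the post only gives the 10TB-per-server and 30TB-SAN figures); plug in your own quotes:

```python
mailbox_db_tb = 10        # data per Exchange server, as in the example above
dag_copies = 2            # each DAG member holds its own copy
san_capacity_tb = 30
san_cost_per_tb = 2000    # assumed shared-storage price (illustrative)
local_cost_per_tb = 300   # assumed local-disk price (illustrative)

san_used_tb = mailbox_db_tb * dag_copies
san_fraction = san_used_tb / san_capacity_tb
print(f"SAN consumed by Exchange: {san_used_tb} TB ({san_fraction:.0%})")  # 20 TB (67%)

# Same resilience either way (the DAG provides it); only the storage bill changes.
san_cost = san_used_tb * san_cost_per_tb
local_cost = san_used_tb * local_cost_per_tb
print(f"shared storage: £{san_cost:,} vs local disks: £{local_cost:,}")
```

Whatever the exact prices, two full mailbox copies on shared storage versus the same copies on local disks is where the gap opens up.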
One big benefit that the article didn't touch on was how much easier virtualisation can make backup and DR. Agentless backup solutions (like Veeam - others are available) just rock my world: if it's on the virtual cluster, it's getting backed up. No agents, no tapes, super-fast restores.
After providing hardware HA for a server, it's the portability, manageability and backups that make virtualisation a default choice for me personally - but still, it's not for everything, every time.
P.S. On a side note, who the hell is working in IT in 2017 and doesn't know where to start / hasn't used any virtualisation? Seriously?!
VMs migrate. Physical servers don't. You can send them to another datacentre, onto a server you've never touched before, and it'll work and keep running like you'd never switched it off. That's a real bonus that you hope you'll never have to utilise.
VMs make better use of server resources. All those "spare" VMs that aren't actually doing anything can sit idle on servers that ARE doing lots of other things. That stupid VM sucking up GBs of RAM for no real purpose can be pushed back to swap while the ones that need it use the hypervisor's real RAM. Few things use a lot of CPU - Exchange uses almost nothing - so it can co-exist with VMs that are CPU-heavy but IO-light.
Additionally, yes, VM backups are SO MUCH NICER. No more faffing by cherry-picking system state items and hoping that you can replicate the config should that fancy network card blow up and you need to put it on something else. Just backup "the machine", with every configuration on it and every setting and snapshots of the historical settings. Done.
VMs also snapshot and replicate: snapshot the live server, spin up the replica in a test environment, play with ALL the settings and break things, and know you can roll back to known-good instantly even if you made a mistake. Sometimes in seconds. And being able to "splice and test" like that is invaluable. "What WOULD happen if we upgraded that primary server to the next version of Windows?" - don't guess... do it... see what happens, just by branching from a snapshot of that EXACT server. Delete it when you're done, or push it back into production.
And redundancy costs twice the resources (or more) because it's redundant. That's the ENTIRE POINT.
VMs are the only thing I'll use now. The only blockers are those stupid things that DEMAND a certain piece of hardware (e.g. dongles). Everything else gets a VM. I run CCTV NVRs from inside VMs and they work perfectly. And it's cheap to spin up a VM every time someone says "the guy is here to install the software for X". Don't faff - just give them an entire VM to do whatever they want in, and then put that VM on your network. They can have no argument then about "Oh, well, it's not compatible" or "it's because you have X installed", etc. Most of my vendors are offering their appliances (e.g. web filters, firewalls, etc.) as VM images now.
VMs and VLANs are the best thing to happen to in-house IT in decades. It literally makes your network portable, to the extent that there's one backup device in my pool which is just a cheap NAS, large enough to hold and offer every VM out over iSCSI.
I could take that box, find ANY decent server hardware, load those VMs, boot them up, and have EVERYTHING running as it was in under a day. Literally my entire network in a box. And - in theory - the only thing I'd need to get running on new physical hardware is a way to load up the one VM that's the hypervisor to all the other VMs. Nested hypervisors are cool.
There's another compelling reason to virtualize: Business Continuity / Disaster recovery.
There are at least two applications that I'm aware of (VMware's Site Recovery Manager, and the hypervisor-agnostic Zerto) that basically keep a spun-down replica of your production machines synchronized to a 'warm' DR site. The downside is that you need a reasonably decent pipe sized for your workload, and storage to hold the replica at the DR site. It also requires a warm DR site, as the site needs to have live hosts, storage, and a couple of VMs to handle the replication nuts and bolts. Once it's configured and running properly, it's a wonder to behold - our annual test went from a full day to about 90 minutes to spin up a fail-over test, and another two to three hours for the application owners to do their testing.
We started to virtualize our environment 8 years ago, and did a hard push four years ago, and haven't looked back.
But it was mercifully free of any blatant advertising. :)
I would like to doubly emphasize: 'design the environment BEFORE YOU START.'
I could probably write up a forum post with at least my recommendations and point of view from owning the virtualization environment at my company for the past five years and two hardware iterations.
...You’ve run your legacy infrastructure into the ground. You bought it six or seven years ago with a view to depreciating the hardware over four years, or perhaps even three, so it’s done its time and then some.
"Legacy"? You ageist wanker. Just wait till they get to be drinking and/or voting age then let the descriptors fly! <snicker>
Biting the hand that feeds IT © 1998–2019