While it is unfair to say, as many vendors do, that server virtualisation will take over the world during the course of the next fifteen minutes, we know from the readers of The Register that organisations large and small are spinning up ever-expanding numbers of virtual machines (VMs). A primary driver for early server …
Too long, didn't read:
"The primary benefit of virtualisation is ease of provisioning of new servers. Use it."
Tony, you say that "A good starting point is to monitor the physical resource consumption of the original servers hosting the applications over their typical work cycles" which I agree with.
The problem comes when translating this to the virtual world:
Provided your application is distributable (if not, why?!?!), you will have to break a monolithic physical server into smaller pieces, then scale the number of servers up *as a reaction to changes in demand*. Remember, getting a clone of a server is (or should be) a simple click of a button, not a protracted ordering of hardware followed by installation woes. Why not use this?
The way to move to "the cloud" is by breaking up your apps and scaling according to demand.
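The "scale as a reaction to demand" loop described above can be sketched very roughly. This is a minimal illustration, not any real cloud API: the function names, thresholds, and the idea of cloning one VM at a time are all hypothetical placeholders.

```python
def scale(load_per_vm, vm_count, target=0.6, min_vms=1, max_vms=16):
    """Return the new VM count for a simple reactive scaler.

    load_per_vm: current average utilisation (0.0-1.0) across the VMs.
    Scales out when the clones run hot, scales back in when they idle.
    Thresholds are illustrative only.
    """
    if load_per_vm > target and vm_count < max_vms:
        return vm_count + 1   # demand is up: clone one more small VM
    if load_per_vm < target / 2 and vm_count > min_vms:
        return vm_count - 1   # demand is down: retire an idle clone
    return vm_count           # within the comfortable band: do nothing
```

The point is that, because a clone is one button-click away, the reaction to demand can be a tiny control loop rather than a hardware purchase order.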
... I suppose monolithic servers that are required for a short period of time will still fit the bill, but an always-on VM that takes up most of a node may as well simply *be* that node.
don't forget the host platform
Funny that monitoring the host platform isn't mentioned at all; everyone's head must be stuck in a cloud. While the settings of a specific app on a monolithic server will read one thing, your new virtual platform should have multiple CPUs and wads of RAM to support those virtual servers. Exceed its capabilities and more than just a single application could go down.
I run 14 VMs on one quad-core with 8GB of RAM. While no individual VM impacts load much, gradually all of the systems tend to eat up RAM and swap, eventually leading to system thrashing. A simple reboot of the host system takes care of it for a while, and gradually I'll move some of those systems to a new VM host. Hopefully...
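The creeping RAM-and-swap problem described above can be caught before the host starts thrashing. A minimal sketch, assuming a Linux host and values parsed from /proc/meminfo; the warning thresholds are illustrative, not recommendations:

```python
def memory_pressure(meminfo, ram_warn=0.9, swap_warn=0.25):
    """Flag a VM host that is drifting toward swap thrashing.

    meminfo: dict of kB values as parsed from /proc/meminfo,
    needing MemTotal, MemAvailable, SwapTotal and SwapFree.
    Returns True when RAM is nearly exhausted AND a meaningful
    fraction of swap is already in use -- the pattern that, per
    the comment above, precedes thrashing on an overloaded host.
    """
    ram_used = 1 - meminfo["MemAvailable"] / meminfo["MemTotal"]
    swap_total = meminfo["SwapTotal"]
    swap_used = ((swap_total - meminfo["SwapFree"]) / swap_total
                 if swap_total else 0.0)
    return ram_used > ram_warn and swap_used > swap_warn
```

Run from cron against the host (not the guests), this is the kind of check that would have flagged the 14-VMs-on-8GB box well before a reboot became necessary.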
Common sense really applies. We all know that servers generally don't need anywhere near the resources allocated to them 95% of the time, which is why virtualisation is popular.
A bog-standard Windows 2008 box generally has 2GB of RAM "allocated", one CPU assigned, and shares a 4Gbps pipe with about eight other servers. Anything a bit heavy (multiple roles, maybe an app server for Oracle Forms etc.) gets another 2GB of RAM and another CPU.
For managing the resources, VMware vCenter does all that for me really. I split the farm into three priority groups, and technically any spare resource can be allocated to a demanding VM guest on demand should it require more juice.
If the host doesn't have enough, the guest gets moved via DRS to another host that can provide the relevant power...
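The DRS-style move described above boils down to a placement decision: find a host with enough free capacity and prefer the one with the most headroom. This is a loose, hypothetical sketch of that idea, not VMware's actual algorithm; the host dict shape and field names are invented for illustration.

```python
def pick_host(hosts, cpu_need, ram_need):
    """Choose a destination host for a VM that needs more juice.

    hosts: list of dicts with free 'cpu' (MHz) and 'ram' (MB).
    Returns the host with the most relative headroom that can
    fit the VM, or None if no host has enough spare capacity.
    """
    candidates = [h for h in hosts
                  if h["cpu"] >= cpu_need and h["ram"] >= ram_need]
    if not candidates:
        return None  # nowhere to migrate: the cluster itself is short
    # Rank by the tighter of the two headroom ratios, so one scarce
    # resource (CPU or RAM) doesn't get masked by abundance of the other.
    return max(candidates,
               key=lambda h: min(h["cpu"] / cpu_need, h["ram"] / ram_need))
```

Returning None for the no-capacity case matters: it is the signal that no amount of shuffling will help and the farm needs another node.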