
Server virtualization beyond consolidation We asked Register readers in North America about their data center priorities for the next year. We commissioned Jon Collins of the analyst firm Freeform Dynamics to write an independent report based on the findings, which is sponsored by Intel and Dell. Virtualization is big, getting …

COMMENTS

This topic is closed for new posts.
Anonymous Coward

From where I sit...

Couldn't agree more with this statement:

"activity in Windows environments running significantly ahead of that on Linux or other platforms"

I haven't seen the same need/benefit in virtualizing Linux/UNIX as Windows. We can load up our Linux/UNIX boxes significantly more (running multiple application instances on the same OS), whereas with Windows there is a 1:1 application:OS ratio, and to reduce our physical server footprint we *have* to virtualize.

Silver badge

Charts failure?

Maybe it's my Debian box's PDF reader, but the charts in the PDF have some rather serious rendering issues for me.

Oh, well, as long as I'm posting a comment...

Beyond consolidation, the neat thing about virtualized infrastructures is that you MIGHT be able to run clusters where you would run single servers before - if you're running your services on an OS that allows that. Doing so with an adaptive virtualization infrastructure that launches more hosts as demand grows and kills them off as it shrinks lets you design services that scale to the limits of your hardware. If the nature of the service permits it, you can even offload demand spikes onto rented equipment, billed by the hour.
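In rough terms, the scaling loop I have in mind is something like this toy Python sketch. Pool, launch_host() and kill_host() are made-up stand-ins for whatever your VM management API actually exposes, and the thresholds are purely illustrative:

```python
import random
import time

# Illustrative thresholds for a naive scale-up/scale-down loop.
SCALE_UP_LOAD, SCALE_DOWN_LOAD = 0.75, 0.25
MIN_HOSTS, MAX_HOSTS = 2, 20

class Pool:
    """Hypothetical stand-in for a pool of virtual hosts."""
    def __init__(self, hosts=MIN_HOSTS):
        self.hosts = hosts

    def average_utilization(self):
        # Stand-in for a real metric, e.g. aggregate CPU across guests.
        return random.random()

    def launch_host(self):
        self.hosts += 1   # ask the VM host to start another guest

    def kill_host(self):
        self.hosts -= 1   # drain connections, then retire a guest

def autoscale(pool):
    """One pass: add a host under load, retire one when idle."""
    load = pool.average_utilization()
    if load > SCALE_UP_LOAD and pool.hosts < MAX_HOSTS:
        pool.launch_host()
    elif load < SCALE_DOWN_LOAD and pool.hosts > MIN_HOSTS:
        pool.kill_host()

if __name__ == "__main__":
    pool = Pool()
    for _ in range(10):   # in practice this loop runs forever
        autoscale(pool)
        time.sleep(1)     # re-evaluate periodically
```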

The physical server has become not some precious, hallowed thing, but a compute appliance. Some guy you've never met in some place you'll never go flashes the server you bought with the VM host software that allows it to be managed, configures its IP address, and it shows up as an increase in the compute and memory resources in your console. You might even use a variety of VM host operating systems, as well as a variety of guest operating systems running services designed to be clustered across differing hardware and VM host platforms for the ultimate in platform resiliency. Such heterogeneous infrastructures are the gold standard of redundancy, but they're admittedly difficult to implement, as you need coders who can deal with shifting platforms. Some of the hosts might even be running non-standard hardware like deprecated Power architecture servers. If you get more compute and memory with desktop platforms that are otherwise (form factor, power use) acceptable, you should buy those.

When something goes wrong with the hardware, a light shows up on the server tech's board and he repairs or replaces it as needed, but you don't need to know. You build your services as clusters, and the rules prohibit all of the virtual machines that serve a cluster from running on any one physical machine. If a node is failing, the software moves the VMs off it; if it fails suddenly, that's OK too, because the other services adapt to the loss and the distributed management software re-launches VMs on active nodes to catch up with the load in seconds, without human intervention. Some fraction of hosts is down at any given time, but it's not the calamity it used to be because the SERVICE doesn't depend on any one host. That's what a "cloud" is. The client sends his request into the cloud, and the cloud always responds appropriately. The client doesn't need to know where the server that handled the request is, or how it operates, any more than the developer does. What he needs to know is that when he sends a request, it's answered promptly and appropriately. Obviously this is easiest to implement as LAMP services.
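To make that anti-affinity rule concrete, here's a toy placement check. The VM and Host shapes are hypothetical stand-ins, not any vendor's API, but real schedulers run essentially this test before placing or migrating a guest:

```python
from dataclasses import dataclass, field

@dataclass
class VM:
    name: str
    cluster: str

@dataclass
class Host:
    name: str
    vms: list = field(default_factory=list)

def can_place(vm: VM, host: Host) -> bool:
    """A host is eligible only if no co-clustered VM already runs there."""
    return all(other.cluster != vm.cluster for other in host.vms)

def place(vm: VM, hosts: list) -> Host:
    """Put the VM on the first eligible host, or fail loudly."""
    for host in hosts:
        if can_place(vm, host):
            host.vms.append(vm)
            return host
    raise RuntimeError(f"no host satisfies anti-affinity for {vm.name}")

# Three web-cluster VMs spread across three hosts; a fourth placement
# would fail rather than double up a cluster member on one machine.
hosts = [Host("h1"), Host("h2"), Host("h3")]
for i in range(3):
    place(VM(f"web{i}", cluster="web"), hosts)
```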

Another neat thing about the pace of technology is that modern desktops are generally vastly overpowered for their workloads. If they're running an appropriately configured OS that allows such things, idle desktops can become servers on demand, launching virtual machines as needed. With such a system, an organization with 100,000 desktops and gigabit networking standards might find itself registering in the Top500 supercomputing sites by accident. Things have taken a strange turn.
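As a rough, assumption-laden sketch of that idea, using libvirt's virsh with a made-up guest name "worker" and an arbitrary idle threshold:

```python
import os
import subprocess

# If the desktop looks idle, lend its cycles out as a guest VM; if the
# user is busy again, give them back. Assumes a Unix box (os.getloadavg)
# and a libvirt-managed guest named "worker" - both hypothetical choices.
IDLE_LOAD = 0.10   # illustrative one-minute load-average threshold
GUEST = "worker"

def desktop_is_idle() -> bool:
    one_minute, _, _ = os.getloadavg()
    return one_minute < IDLE_LOAD

if desktop_is_idle():
    subprocess.run(["virsh", "start", GUEST])      # lend the cycles out
else:
    subprocess.run(["virsh", "shutdown", GUEST])   # give them back
```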

It's important when we venture into new fields to do our best to stay legal, though when travelling in strange lands the laws can be unfamiliar or vague. If you're paying for your server operating systems per virtual host, per physical host, per processor or per client, there are some accounting issues to deal with. If you stick to the free stuff and don't need support, you're fine, because your free server license covers unlimited cores, processors, servers, RAM, storage and clients. If you're using free operating systems with paid support, your sales rep will be able to negotiate support coverage for your dynamically scalable use, because three hundred potential scaled clustered servers require no more support than three, and a site support contract for unlimited incidents will be readily available. Commercially licensed software? Not so much. There's no way you'll get a certification in writing from some commercial software vendor that your cross-platform multiuser compute cloud that launches virtual machines on demand complies with the licenses you've paid for. That's just not going to happen. If you try to go that route you could wind up in an Ernie Ball situation, which could be a resume-generating event: http://news.cnet.com/2008-1082_3-5065859.html

