Software apps and the server room

As mentioned in last week's Reg reader server management poll results, one of the curses of the IT industry is the ease with which we can remove the working context between things and talk about them, as though they exist in isolation. Pundits, analysts and anyone with a point product to sell do this with ease. But it doesn’t …

COMMENTS

This topic is closed for new posts.

Storage. Storage. Storage.

I think the biggest thing that could be improved in our server room is the purchase of a decent SAN. (Indeed, this is something I note as a “nice to have” upgrade in many shops teetering between “S” and “M” in the SME acronym.) SANs are expensive. Or rather, they were expensive during our initial datacenter design about two years ago. Prices on entry-level gear are coming down, yet, like VMware’s management utilities, they are still not quite within reasonable reach of the lower-to-middle class of organizations that fit in the “SME” bracket.

At present, we have several ESXi boxen deployed, all of which were fitted with RAID controllers and local storage. Had we deployed them all at once, we would never have gone this route, as that money could have been spent on a SAN that would have made life so much easier. Sadly, as with many datacenters, capacity demands grew organically, to the point that we now have 25 physical 2-socket servers in 4 cities, all with HW RAID cards providing local storage for ESXi.

Now don’t get me wrong, ESXi is grand, and the ability to slice and dice makes life very easy. But the dang 20Mbit/sec cap on moving VMs on and off of ESXi 4 using the VI Client is a huge pain in the [censored] when one of the VM servers has a lie-down.
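To put that cap in perspective, here is a quick back-of-the-envelope sketch (the 60GB VM size is purely a number I picked for illustration):

```python
# Rough back-of-the-envelope: evacuating one VM's disk at the ~20Mbit/sec
# the VI Client manages, versus what a plain gigabit link could move.
# The 60GB disk size is an illustrative assumption, not a real VM.

def transfer_hours(size_gb: float, rate_mbit_s: float) -> float:
    """Hours needed to move size_gb gigabytes at rate_mbit_s megabits/second."""
    return (size_gb * 8 * 1000) / rate_mbit_s / 3600

vm_gb = 60
print(f"VI Client (~20 Mbit/s):    {transfer_hours(vm_gb, 20):.1f} hours")   # ~6.7
print(f"Gigabit LAN (~800 Mbit/s): {transfer_hours(vm_gb, 800):.1f} hours")  # ~0.2
```

Call it the better part of seven hours per decent-sized VM, which is why a dead host turns into a very long evening.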

If each site had a SAN, this would be far less of a problem, as the actual “processing boxen” would be completely disposable. ESXi-01 ate itself? Okey dokey; remap its LUN over to ESXi-spare and fire up the VMs.
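If it helps anyone in the same boat, the “fire up the VMs” half of that can even be scripted. A minimal sketch using pyVmomi (VMware’s Python SDK) follows; the hostname, credentials and VMX path are placeholders, it assumes a host licence that allows API writes, and the LUN remap itself still happens on the SAN side:

```python
# Hedged sketch: register and power on a VM whose files live on a LUN that
# has just been remapped to the spare host. Hostname, credentials and the
# VMX path are placeholders; error handling omitted for brevity.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask

si = SmartConnect(host="esxi-spare.example.local", user="root", pwd="secret",
                  sslContext=ssl._create_unverified_context())
dc = si.RetrieveContent().rootFolder.childEntity[0]   # standalone host: "ha-datacenter"
compute = dc.hostFolder.childEntity[0]                # the host's compute resource

# Register the orphaned VM from the remapped datastore, then power it on.
task = dc.vmFolder.RegisterVM_Task(path="[remapped-lun] vm01/vm01.vmx",
                                   asTemplate=False,
                                   pool=compute.resourcePool,
                                   host=compute.host[0])
WaitForTask(task)
WaitForTask(task.info.result.PowerOnVM_Task())
Disconnect(si)
```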

Larger organizations have this…and better. They can afford the magic software that makes it all work seamlessly. For organizations stuck doing this “by hand,” even a basic SAN is a huge help. (To the pedants: shut it. I do realize that to do this literally “by hand” here I would have to be flipping bits with a magnet.)

You might ask why a company with 25 virtualization servers (and far more than that in VMs) can’t afford SANs and VMware magic crystals and suchlike. Funny story, that; let’s round it up to “CALs” and “the business we work in is horrifically low margin, yet amazingly computer intensive.”

So, any SAN vendors out there want to make a huge friend for life? Give me 4 SANs and I’ll fill ‘em with drives. No takers? Well, worth a shot…


Servers and Management

Looking at it from the budget-constrained SME front, a major issue is that most of the available management tools are not financially viable options. Example: 4 sites, 50 users, 3 IT staff, and a $100,000/year budget.

Virtualization can help with physical space, thermal, and power management concerns, but it still presents certain challenges of its own. We run 4x dual quad-core CPU virtualization servers for Head Office, and 3x dual dual-core CPU servers per site (total 16 servers, 32 CPUs). The virtualization servers run VMware ESXi 4 using hardware RAID on DAS (7,200RPM near-line drives for system/templates, 10,000RPM drives for active VMs), with their dual onboard NICs teamed for throughput and failover. Combined, these servers host approximately 65 resource-intensive (50 for users, 15 for servers) and 10 low-usage VMs in Head Office, and approximately 8 resource-intensive (primarily production-oriented) and 3 low-usage VMs per site. The design is to have a “hot spare” virtualization server within each network segment to take over from a failing/failed server. The major challenge therein is the cost of high-availability features such as VMware vMotion (which requires shared NAS/SAN storage to be usable).
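To save anyone reaching for a calculator, the figures above tally up roughly as follows (treating “per site” as the four non-Head-Office sites, which is the reading that matches the stated 16-server total):

```python
# Rough tally of the environment described above; counts are as cited,
# the 4 remote sites is my reading of "total 16 servers, 32 CPUs".

ho_hosts, sites, hosts_per_site = 4, 4, 3
hosts = ho_hosts + sites * hosts_per_site
sockets = hosts * 2

ho_vms = 65 + 10                 # resource-intensive + low-usage, Head Office
site_vms = (8 + 3) * sites       # per-site VMs

print(f"hosts: {hosts}, sockets: {sockets}, VMs: {ho_vms + site_vms}")
# hosts: 16, sockets: 32, VMs: 119
```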

First, consider the cost of implementing a redundant shared-storage architecture based on the requirements cited above. While we could probably do what we need with a series of 8-bay iSCSI boxes per site, we are still likely looking at about $65,000-75,000 just for the initial investment in the iSCSI boxes alone (without any disks, and based on 8-12 units for Head Office plus 4 per site). That may seem like a lot at first glance, but once you look at the resource requirements cited above and build redundancy into the shared-storage infrastructure, it adds up quickly. And that does not even begin to consider how many disks, of what capacity and speed, are required per site for the VMs they will be serving up.
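For the curious, that $65,000-75,000 range falls out of the unit counts above if you assume something in the region of $2,700 per diskless 8-bay chassis (a purely hypothetical figure, not a vendor quote):

```python
# Reconstruction of the shared-storage estimate above. The per-chassis price
# is a hypothetical round figure chosen only to show how the range is built;
# substitute a real vendor quote.

PRICE_PER_CHASSIS = 2700     # assumed cost of one diskless 8-bay iSCSI unit
SITES = 4                    # sites besides Head Office

low  = (8  + 4 * SITES) * PRICE_PER_CHASSIS   # 8 units at Head Office + 4 per site
high = (12 + 4 * SITES) * PRICE_PER_CHASSIS   # 12 units at Head Office + 4 per site
print(f"chassis alone, no disks: ${low:,} - ${high:,}")   # $64,800 - $75,600
```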

Second, consider the cost of properly licensing VMware vSphere Advanced (the lowest tier that includes vMotion). Doing this with just 1 year of “Gold Support” for the cited servers would run over $100,000 (more than a year’s budget).
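vSphere 4 is licensed per physical CPU (socket), so 32 sockets multiplies any per-CPU figure up fast. The unit prices below are placeholders rather than VMware’s list prices, but they show how the total lands north of $100,000:

```python
# Illustrative per-socket licensing maths. Both unit prices are placeholder
# assumptions, not VMware list prices -- plug in the numbers from a quote.

SOCKETS = 32                   # 16 dual-socket hosts
LICENSE_PER_CPU = 2600         # assumed vSphere Advanced licence per socket
GOLD_1YR_PER_CPU = 650         # assumed 1 year of Gold support per socket

total = SOCKETS * (LICENSE_PER_CPU + GOLD_1YR_PER_CPU)
print(f"{SOCKETS} sockets, licence + 1yr Gold: ${total:,}")   # $104,000
```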

From a non-virtualization standpoint, there are the costs of systems management tools for the datacenter. These range from hardware like IP KVM switches and OOB management cards with IP KVM capabilities (some of which offer the ability to boot from remote media) to software such as Microsoft System Center (and all of its various additional modules) and VMware’s vSphere as mentioned above, amongst numerous other proprietary software suites.

An IP KVM switch starts at $1,000+, while OOB management cards tend to be a few hundred dollars each. Then you get into the realm of software management tools, where you have to pay both the multi-thousand-dollar initial cost for the base software and the additional per-managed-device license costs on top of that.
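Stacked against a $100,000/year budget, even the “small” items add up. A rough, entirely hypothetical shopping list might look like this; only the 16-host count comes from the figures above:

```python
# Entirely hypothetical shopping list for the management kit described above;
# every price is an assumption, only the host count is taken from earlier.

HOSTS = 16

ip_kvm   = 4 * 1000        # one entry-level IP KVM switch per site, ~$1,000 each
oob      = HOSTS * 300     # OOB management card with IP KVM per host, assumed ~$300
suite    = 5000            # assumed base licence for a systems-management suite
per_node = HOSTS * 200     # assumed per-managed-device licences

print(f"rough total: ${ip_kvm + oob + suite + per_node:,}")   # $17,000
```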
