What does a server environment look like anyway?

Stock photo companies have got a lot to answer for. For most people, the phrase ‘server environment’ generally conjures images of sleek racks of equipment, all glistening chrome and black with just the suggestion that the equipment requires no management at all, or if it does, it will be conducted in some place far away like the …

COMMENTS

This topic is closed for new posts.
  1. Anonymous Coward
    Anonymous Coward

    How could the photos be taken?

    Never heard of Photoshop?

  2. Anonymous Coward
    Thumb Up

    We have already "virtualised" once..

    when we went from 5 rows of HP K-class and N-class servers to 2 partitioned PA-RISC systems about 6 years ago. Freed up an awful lot of floor space, but it still took us 4 years to get the old hardware offsite. (Even now I still have a dead K-class occupying 2 floor tiles.) Phase II of virtualisation is for Windows and is now the corporate standard for new builds.

    Also, we are moving everything to one end of the data centre (known affectionately as "Operation Budge-Up"). Apparently we can get a tax break on the large amount (two-thirds) of contiguous unused floor space (though personally I think the accountant has been sniffing too much correcting fluid).

  3. Anonymous Coward
    Anonymous Coward

    Of course they exist....

    right-thinking companies bring in the proper contractors/vendors to do the installs. If you ever want a real cabling job done properly, bring in the guys who put the racks together in the factory.

    I used to work for a services division, and we regularly brought along the "grunts" to do the job well; they never failed to satisfy the architects' exacting standards.

    In most other datacentres, the kit is just thrown into racks with no regard for weight distribution, cooling or power requirements, using whatever cables happen to be lying around.

    Lack of discipline in a datacentre is just asking for trouble.

    Oh and BTW, Sun also produce x86 boxes - obviously you meant SPARC.

  4. Pirate Dave Silver badge
    Pirate

    Oh sure...

    the datacenter used to look pretty -- 8 years ago when we built it. But datacenters evolve over time. New stuff comes in and needs a hole to be installed into. Old stuff eventually dies and gets removed. Admins get in a hurry some days and don't route cables in a totally anal-retentive fashion; they just sling them from point A to point B with the FULL intention of coming back later and running the cable properly. Yeah, right. Then the other admins retire or get fired, and you're left with one poor sap who has to know ALL of the systems and do ALL of the work in the server room because, well, he's a sap and keeps letting the higher-ups pile more and more onto his plate. Since it saves them a few salaries, they think the world is beautiful, never mind that the poor sap can't GUARANTEE anything anymore because he's stretched so thin. So the server room gradually gets messier and messier until it becomes the Lair of Death, Dismemberment and Disconnection that it is today.

    Virtualization helps some with consolidation, but isn't yet the be-all/end-all that some think it is. Since there's usually somebody or some project out there who needs their own special server, even with consolidation the old servers just get repurposed as special servers, so the racks don't change much. That said, I have been ripping out the really old servers that I built from parts years ago. There's not much need for an old 400 MHz Pentium II box anymore, since whatever workload it's been doing for 10 years will hardly make a dent in a VMware host with two quad-core Xeons, so that's an easy migration choice.
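
    A rough sketch of that consolidation arithmetic, using purely illustrative clock speeds and headroom figures (nothing measured from the boxes in question):

        # Back-of-envelope consolidation check; every figure here is an
        # illustrative assumption, not a measurement.
        LEGACY_BOX_MHZ = 400      # single-core Pentium II class machine
        HOST_CORES = 2 * 4        # two quad-core Xeons
        HOST_CORE_MHZ = 2500      # assumed clock per core
        HEADROOM = 0.75           # keep 25% of the host free for spikes/failover

        usable_mhz = HOST_CORES * HOST_CORE_MHZ * HEADROOM
        legacy_boxes_per_host = usable_mhz // LEGACY_BOX_MHZ

        print(f"Usable host capacity: {usable_mhz:.0f} MHz")
        print(f"Legacy boxes absorbed per host (CPU only): {legacy_boxes_per_host:.0f}")
        # In practice RAM, disk I/O and licensing bite long before CPU does.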

  5. Anonymous Coward
    Boffin

    Virtualization for branch offices = WIN.

    The idea (and what we are working on at my company) is to consolidate to the fewest possible physical servers at the branch offices for branch-office-specific applications and services, and to centralize company-wide applications and services at a central data center, which is built much like you'd expect: raised floor, with cooling, power, and data run under the floor to each rack. The branch offices utilize a telco-style server room: concrete floor with racks (both two- and four-post) installed properly, with power and data cabling fed from the top.

    As far as virtualization goes, the infrastructure at each branch office (domain controllers, print servers, etc.) runs as virtual machines on a single physical host, whereas mission-critical apps that can't be virtualized run on their own hardware.

  6. steven W. Scott

    Pay no attention to the man behind the curtain

    Yes, we have a pristine, camera-friendly "data center" with the latest and greatest racking, overhead cabling, and environmental controls, chock full of high-density blades. And then there's the rest of the data center, which occupies the other two-thirds of the building: a hodgepodge of Fuji, Solaris, Z-series, ATLs, SAN arrays, an under-tile mish-mash of cables, a Wintel farm, P690 AIX boxes, tape devices and networking equipment, all of which handles the lion's share of the processing.

  7. Trevor Pott o_O Gold badge
    Dead Vulture

    That lasted about a week...

    Ah yes, the "pretty" datacenter. We had one, once. Cobbled together from various racks, all white-box servers, cables run by our in-house handyman, but we took the time to do it pretty. It lasted about a week. Then the first round of server upgrades was needed. A new system or two had to go in. Then UPS overhauls, etc., etc., etc.

    The datacenter, at least in an SME, is an organic thing, constantly changing and evolving. If there is a challenge here, it is how to manage that evolution. Don't try to forestall it or reject it; embrace it, and find the best way to deal with it.

    For us, this turned into "Maintenance Tuesday." The second Tuesday of every month we stay after work a few hours and do physical maintenance on some part of the datacenter. We reboot or patch virtual machines, swap out aging disks or systems, test the UPSes, you name it. Sometimes we take an entire Maintenance Tuesday just to re-wire the datacenter because it got too messy. Our customers know about it, they expect it, and we do our best to ensure that no services are interrupted. (Or, if they will be, that it is posted on our site well beforehand.)

    Oh, and the other challenge is dead systems. Since our Server Empire is a collection of white-box two-socket servers running ESXi, from time to time one of them explodes a component. You have to keep spare systems around and have disaster plans in place. (What are the 15 ways you can move the VMs to the other systems effectively? Are you ABSOLUTELY SURE you have the spare capacity to suffer the loss of a virtual server on any site? Is DFSR working on both the primary and backup file servers? Are they in sync? Etc., etc., etc.)
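
    For what it's worth, that "are you ABSOLUTELY SURE about spare capacity" question is easy to script; a minimal sketch below, with invented host names and RAM figures rather than anything pulled from a real inventory:

        # Minimal N+1 check: can the surviving hosts absorb a failed host's VMs?
        # Hosts, VM loads and RAM sizes are invented for illustration.
        hosts = {
            "esx1": {"ram_gb": 64, "vm_ram_gb": 56},
            "esx2": {"ram_gb": 64, "vm_ram_gb": 50},
            "esx3": {"ram_gb": 48, "vm_ram_gb": 30},
        }

        def survives_loss_of(failed):
            """RAM-only test: do the other hosts have enough free RAM for the failed host's VMs?"""
            displaced = hosts[failed]["vm_ram_gb"]
            spare = sum(h["ram_gb"] - h["vm_ram_gb"]
                        for name, h in hosts.items() if name != failed)
            return spare >= displaced

        for name in hosts:
            verdict = "OK" if survives_loss_of(name) else "NOT ENOUGH SPARE CAPACITY"
            print(f"Lose {name}: {verdict}")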

    It’s all grunt work, it’s all not sexy, and because it’s all about “spares” and “backups” you have to fight tooth and nail for every cent of your budget. (That, and I honestly think that the quality of computer components has dropped significantly over the past ten years. Seriously, failure rates on some of these components are abysmal. You hear that, Freeform Dynamics folks? There’s a whole other series of articles for you! What impression do folks like us have of component failure rates from different vendors? Not system failure rates from the Tier 1s, but component failure rates from manufacturers.)

    So the moral of the story is: backup, backup, backup. And have a plan for absolutely everything. If it can die, it will, and you had better be ready for it.

  8. -tim
    Boffin

    BOFH or PFY?

    We have a photogenic data center. It's 7 by 4 floor tiles, so it could hold 5 racks, but it currently has 2, and it stays neat and clean. The floor is even rated to take a load of a ton per rack. Our big problem is that the original design was for vertical cooling from the bottom of the racks, back when 1 RU servers were getting shorter, so our cabinets are only 900 mm deep. The result is that we can't run most new 1 RU servers without messing up the cool-air flow for the other computers. We are building a second site soon, and it will have much deeper racks with front-to-back cooling.

    If I were doing it again, I would have liked to put in more -48V DC and a heat exchanger. 50% of the time a simple heat exchanger would be more efficient at cooling the room than the split systems we use today.
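
    That "50% of the time" figure is the sort of thing you can sanity-check with a few lines of script against hourly outside-air temperatures; a sketch below, with a made-up temperature series and an assumed 18 C changeover point rather than real site data:

        # Rough free-cooling estimate: how many hours a year is outside air cool
        # enough for a simple heat exchanger to carry the room on its own?
        # The temperature series and the 18 C threshold are assumptions, not site data.
        import random

        random.seed(1)
        outside_temps_c = [random.gauss(14, 8) for _ in range(8760)]  # pretend hourly data

        CHANGEOVER_C = 18  # below this, assume the heat exchanger alone can cope

        free_hours = sum(1 for t in outside_temps_c if t < CHANGEOVER_C)
        print(f"Free-cooling hours: {free_hours} "
              f"({free_hours / len(outside_temps_c):.0%} of the year)")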

