This week’s poll spawned some very interesting responses. We asked you where you keep your servers, and how these relate to the kinds of issues you face. Looking at locations first of all, you told us that in the majority of your organisations the machines that form the backbone of IT operations now reside in one or a small …
Missed that survey
I'm pretty good about keeping the reg up to date on what we do around here.
We have 4 active datacenters in this building housing servers. Including physical and virtual on small-iron boxes we've got about 3600 servers. We have a few other small datacenters around the country, but mostly those are for nothing more than user authentication and personal file storage, and some servers handling telephony systems; all our operational servers and business data are in a single location. We have 10 mainframes (z7 - z10) and a few 595s to go with them.
Now, as for those datacenters, they're all large, fully built out, raised floor environments, with dedicated air handling and power systems, what people typically think of as a "datacenter."
However, having systems in 4 rooms is logically no different from having them in a single room... They're all on the same backbone network, all have out-of-band connectivity to operational systems, and regardless of the room everything is supported by 2 completely independent power systems, 2 separate ISPs, and fully redundant network and SAN connectivity across separate cables and switches; it's essentially one big pool of systems. A failure at the rack, row, datacenter, or even building level will not cause an outage.
Data growth is an issue for us, but not because of data storage; it's the limitations of our backup infrastructure... Once you get to the point of using large-scale Tier 1 storage systems, dedupe is inherent. Also, mainframe virtualization uses single binary images for multiple systems, so data growth there is limited. Database backup/replication is also not a challenge. The challenge is system-level and file data backup, and managing legal hold, HIPAA, and SOX required data backups and archives. We have dozens of rack rows of nothing but IBM tape chassis for TSM. Actually, getting the data to tape is not the issue, it's recovering a system... The sheer number of tapes required to restore a single system using TSM's backup methodology is ridiculous (master once, incremental forever is a BAD idea, really bad, but mastering all these systems on even a monthly rotation would nearly triple our tape load.) We have plans to go tapeless (for internal recovery, resorting to tape only for archive) but it's a more-than-$10M deployment, and with a big Win2K kill-off in process, it was not in the 2009-2010 budget...
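The restore-cost tradeoff described above can be sketched as a toy model. The function names and the assumptions (one tape per backup job, one incremental per day, no tape reclamation or collocation) are illustrative, not a description of TSM's actual storage-pool behaviour:

```python
# Toy model: how many tapes a single-system restore might touch under
# "master once, incremental forever" versus a monthly full backup.
# Assumes one tape per backup job and one incremental per day
# (hypothetical numbers, for illustration only).

def tapes_incremental_forever(days_since_master: int) -> int:
    """One original master tape plus every daily incremental ever taken."""
    return 1 + days_since_master

def tapes_monthly_full(days_since_master: int) -> int:
    """One full tape plus only the incrementals since the last monthly full."""
    return 1 + days_since_master % 30

for days in (30, 180, 365):
    print(f"{days:>3} days: incr-forever={tapes_incremental_forever(days):>3}, "
          f"monthly-full={tapes_monthly_full(days):>2}")
```

The model makes the commenter's complaint concrete: a year in, incremental-forever means a restore could span hundreds of tapes, while periodic fulls cap the restore set at roughly a month's worth of media, at the cost of re-mastering every system each cycle.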
Please mentally insert a long and witty comment here that is somehow appropriate and relevant to the situation. I've had but 4 hours of shut-eye, a file server just ate itself, and one of the ESXi boxen just blew a DIMM. Oy, and the coffee's not even made yet!
Counting the minutes until pub O'clock...