Because the server room is certainly no place for pets

Legacy IT is toxic. Virtualisation is the default choice for new data centre deployments, but for existing and legacy workloads, justifying hardware refreshes is often difficult. Shedding light on the often poorly-accounted-for costs hiding in your data centre can provide sufficient rationale to move your infrastructure forward …

  1. Mage Silver badge
    Thumb Down

    Tape and VM?

    If you need to read old tapes, the drive may not work on a VM.

    Transfer the tapes now to HDD stored offline, or to new-format tapes, depending on size.

    A VM may not exist for other old platforms.

    The old CPU hardware may be there for other hardware I/O. Little beyond high-level ports and networking is available in a VM; by definition a VM is unlikely to be able to access strange I/O or hardware.

    This is a pointless article. We all know what a VM can do and the advantages of it.

    Explaining the limitations of VMs / virtualisation might be more useful.

    1. This post has been deleted by its author

      1. Ben Tasker

        Re: Tape and VM? - LMFTFY

        This is a pointless article with baseless scaremongering

        'Toxic IT'? Seriously?

        Not everything can be efficiently virtualised. JIRA is a (reasonably) popular enterprise app, and it _can_ be virtualised (in the sense that it's not impossible), but the problems you invite by doing so can be myriad (especially if you've got vMotion set up). If your business relies on a tool being available, why take that risk?

        Virtualisation is a tool, it's important to understand when to use it and when it's not appropriate to do so - that's going to change on a case by case basis, so there aren't really any blanket rules on what should be virtualised.

        It's also equally important to ensure non-technical managers understand that just because you could run all those 'toxic' servers as VMs on a single host (to reduce costs), it's not necessarily a good idea.

        The phrase 'Toxic IT' sounds like the garbage you might hear come from a marketing dept, not from an educated professional.

        For the record, I definitely wouldn't fall into the 'old' category either.

        1. Fungus Bob

          Re: Tape and VM? - LMFTFY

          "This is a pointless article with baseless scaremongering"

          How else are they gonna prop up sagging sales?

    2. Random K
      Devil

      Re: Tape and VM?

      >This is a pointless article. We all know what a VM can do and the advantages of it.

      OK, I'll play devil's advocate here. While I get your point about legacy I/O, even KVM supports PCI and USB pass-through with some effort. I would think that would cover a fair bit of legacy kit (excluding ASI cards, proprietary daughterboards, and other such horrors which couldn't even be physically connected to more modern kit). I once had a VM talking to an ancient PCI Fax modem for example. No idea why, but it was the only card I could ever get to send reliably through our supposedly 40+ year old telecom wiring.
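
      For anyone who fancies trying it, here's a rough sketch of what that PCI pass-through looks like when driven from the libvirt Python bindings. It's a sketch, not a drop-in recipe: the guest name "legacy-fax-vm" and the PCI address are made up for illustration (find the real address with lspci), and the host needs the IOMMU (VT-d / AMD-Vi) enabled before any of this works.

          import libvirt

          # Hostdev XML describing the physical PCI card to hand to the guest.
          # The address below is illustrative only; substitute the one lspci reports.
          HOSTDEV_XML = """
          <hostdev mode='subsystem' type='pci' managed='yes'>
            <source>
              <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
            </source>
          </hostdev>
          """

          conn = libvirt.open("qemu:///system")       # local KVM/QEMU hypervisor
          dom = conn.lookupByName("legacy-fax-vm")    # hypothetical guest name
          # 'managed=yes' lets libvirt detach the card from its host driver and
          # give it back when the guest is done with it.
          dom.attachDeviceFlags(HOSTDEV_XML, libvirt.VIR_DOMAIN_AFFECT_CONFIG)
          conn.close()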

      None of that gets to the real point of the article though. Keeping aging servers and applications around can (and likely will) end up costing way more than a rip-and-replace. I know how awful those experiences can be first hand, but isn't keeping stuff around that's difficult to impossible to replace/repair the IT equivalent of playing Russian roulette? If you're an old hand who knows that box inside and out that's great, but what happens if you get hit by a bus? I guess I just read the article less as "virtualization will solve all your problems" and more "people have come to expect access to modern tools for backup/migration/failover so get your old shit in order". Do you want to deal with the disaster when lady luck decides, or do you want to at least get to plan for the pain of migration?

  2. Roger Kynaston

    Hrmm

    I remember a 20 year old Ultra1 running some business critical system. Over and over I said it was a huge risk running an unsupported version of Solaris, blah blah. The answer: we don't have time to upgrade/migrate/decommission and it would be too expensive.

    It never actually fell over while I was there but it was no fun being the only one left with some knowledge of Solaris 2.5.

    The challenge is often getting the business to recognise the costs and risks with legacy stuff rather than precious old timers guaranteeing their pensions.

    1. Wzrd1 Silver badge

      Re: Hrmm

      "I remember a 20 year old Ultra1 running some business critical system."

      As bad as an old DEC Alpha server I was called in on.

      Documentation on configuration, absent, taken when the company sacked their quite efficient IT guy.

      High end external SCSI RAID, main controller blown and replacement no longer available.

      Legato backup in use, configuration unknown and if you know Legato, it possessed nearly infinite configuration options.

      In the end, the sacrifice of the ram was insufficient; a few prime bulls also failed; seven virgins and my sanity were the final required sacrifice to reverse-engineer that Heath Robinson contraption (well, it's a bit more modern and hence a Rube Goldberg monstrosity).

      All to then port everything over to our more modern confidence job product, a commodity motherboard inside of a pretty case and called a server, with "dual mirroring", meaning RAID 1.

      By the time my company was done with that company, they'd have been better served by retaining their IT guy.

      But, those were my formative years, while I was learning the skills of a BOFH.

      Today, it's likely I can give Simon a run for his money, departing afterward *with* his money.

      After all, today, I'm in information security, on the technical side. I'm the guy who can figure out a result of policy in an AD environment of 1000+ OU's, in a forest with an equal number of elements and if given enough paper, accurately display the propagation results.

      Then, move on to "Why in the hell are *those* on the same VLAN as the operational, internet-facing hosts? Place them on a VLAN configured *so*, with access to *these* hosts only, and segregate those hosts *this* way", giving configurations on the fly that work.

      Those who oppose me are typically sent to explore alternative employment options, largely because the local swamp is now full and they closed the local land fill.

      Still, we're moving offices. Currently, we're near the crab capital of the land. Later, I'll be in alligator country. Opportunities for advancement abound.

      Now, just *where* did I put that mold of the alligator head? I'll need that for the 3D scanner and printer, to ensure a proper fit for the lasers.

  3. Velv

    "but actual IT people trying to justify their lack of modern skill sets"

    Remember the old motto... "If you're not part of the solution, there's money to be made prolonging the problem"

    1. This post has been deleted by its author

      1. Anonymous Coward
        Anonymous Coward

        I find the implication that IT old timers, for want of a better word, insist on doing things the traditional way simply because they are out of touch with modern concepts, don't want to change, have fond memories of obsolete hardware, etc, extremely insulting.

        We built systems 30 years ago that are still running. We understand the values of build quality, reliability, and future manageability that are sadly lost on a lot of people today. We didn't write code just ten years ago that relies on IE 6 and is a nightmare to port forward.

        Hear hear. Some of us even started with this weird notion of interoperability - even if you can't pronounce it after a few beers, it's still as essential now as it was then. Heck, I built stuff 15 years ago that pretty much still supports a whole country (at which point several people suddenly realise who I am and what I'm talking about :) ).

        1. Wzrd1 Silver badge

          "Heck, I built stuff 15 years ago that pretty much still supports a whole country (at which point several people suddenly realise who I am and what I'm talking about."

          Telephone networks are a lot different from real IT.

          <ducking>

      2. Barry Rueger

        No really, wire recorders have a much warmer sound.....

        Ah yes, spoken with the authority I once heard from a radio engineer* who steadfastly refused to swap out ancient reel-to-reel tape decks** for computer audio editing and serving. Or even to develop a plan to replace thirty year old mixing consoles (Penny & Giles faders in a McCurdy) with brand new ones that had actually been purchased.

        Or the guy who claimed repeatedly that it was not possible to eliminate the "Buzz" in the audio chain that happened every morning at 10am. A subsequent engineer finally untangled and removed the mound of cabling behind the mixing board (3 feet wide x 2 feet deep!) and voila - no buzz.

        I tend to stay one generation behind the latest and greatest in software and hardware, and have been known to rescue some pretty old machines, but I also understand that there comes a time when it makes no sense whatsoever to hold together old technology.

        "We built systems 30 years ago that are still running" always sounds to me like "We've got a box of thirty year old components, some of which aren't manufactured any more, and the balance of which have been repaired several times."

        * "Engineer" in North American broadcasting is not someone with an actual engineering degree. It's the guy who fixes stuff around the radio station.

        ** Actually, two very ancient and knackered Ampex machines were eventually stolen from deep storage. The insurance settlement bought four new PCs with audio software!

        1. Tim99 Silver badge
          Joke

          Re: No really, wire recorders have a much warmer sound.....

          Actually, two very ancient and knackered Ampex machines were eventually stolen from deep storage. The insurance settlement bought four new PCs with audio software!

          OK, BOFH, so what did you do with the ancient Ampex machines then?

          1. Wzrd1 Silver badge

            Re: No really, wire recorders have a much warmer sound.....

            "OK, BOFH, so what did you do with the ancient Ampex machines then?"

            In high school, I actually serviced two-inch Ampex videotape machines. I *loved* the performance of the machines; today, it's sadly and laughably obsolete.

            But, I ponder what we could do with that technology using today's technology, just as a "science experiment".

            Those tapes had a storage capability that is only being matched today, and could store *more* than HD for every centimeter of tape.

            At that time, I also repaired every television in the school, the equipment being hybrid vacuum tube and transistor technology, hence having to learn both. That was all extracurricular, a personal hobby.

            That said, high school was a disappointment, as our junior high school had electron microscopes and an observatory. They were lost when I was in high school.

            When our children went to the same school, chemistry class consisted of terminology and M&M's (seriously).

        2. Wzrd1 Silver badge

          Re: No really, wire recorders have a much warmer sound.....

          "I tend to stay one generation behind the latest and greatest in software and hardware, and have been known to rescue some pretty old machines, but I also understand that there comes a time when it makes no sense whatsoever to hold together old technology."

          Same here. I'm also fully qualified to be an engineer for such systems.

          Frankly, I'd gin up a one farad capacitor at 400 volts, then hand him the leads, one to each hand.

          I also go back to when that described device was the size of a small dumpster.

          Regrettably, it's unlikely any could afford my rates today.

      3. Destroy All Monsters Silver badge
        Windows

        I'm conservative as anyone, but...

        "cloud computing, HSM, virtualisation, all stuff that's been around for decades under a different name"

        No it hasn't. Pretending otherwise is not a service to anyone and is in the general region of "IT creationism".

        The situation where IBM had "virtual machines" back when, on extra-expensive hardware built from discrete logic and triphase current converters, where you could admire the 256 KByte RAM usage indicator on a terminal screen and sip coffee while the next program was spooled from a magnetic tape, is NOTHING like today.

        Or maybe a WWI infantry charge is just like a modern tank brigade plus air support coming at you. Who knows.

        1. Wzrd1 Silver badge

          "Or maybe a WWI infantry charge is just like a modern tank brigade plus air support coming at you. Who knows."

          As one who has both modern military experience that is still current *and* current, bleeding edge IT experience, suffice it to say I understand both viewpoints and you do not.

          The tank brigade would be countered with covered pits, for the infantry to engage inside of the blind zone of the tanks, within danger close of the tanks from the air support.

          The IT is addressed similarly. I've done the latter, cutting loose the hopelessly obsolete and overly expensive, protecting the worthwhile, pending porting to a more modern, less expensive, but effective platform.

          So, I'll say, I can out-script you in the environment of your choice, I can outfight whatever military force you choose. I can out-fart you, but then, I'm a lot older than you. Arthritis is also an issue, but it's more an annoyance that triggers adrenaline and increases strength.

          The latter being of note by my ability to throw a 20 stone man into an object within two meters and change and disable that man.

          Now, I'm not suggesting violence. I'm suggesting a certain specific experience set that you badly lack.

          I've worked with ancient IT, I work with modern IT. I've worked with specialist warfare teams.

          I far prefer the IT environment, ancient tech present or not.

          I know how to handle ancient and get it ported to more sustainable technology that wouldn't bankrupt the Crown of England.

          In a couple of decades, you just *might* accomplish that, if you learn to actually learn old and new things.

          I'm also proficient with vacuum tube technology, germanium transistor technology and VLSI technology, working with all of the above.

          So, I'll suggest you do one thing: "Open your mind".

          For, closed minds are the path to extinction.

          Still, it's only a suggestion.

      4. Wzrd1 Silver badge

        "Just because we're not enthusiastic about some new fangled thing that isn't really new at all, (cloud computing, HSM, virtualisation, all stuff that's been around for decades under a different name), doesn't mean we don't understand it. It just means it's crap and better avoided."

        True enough, I'm nearing my mid-50's and realized that as well.

        That doesn't mean that every operation requires an AS/400. Mainframes have been relegated to payroll and hospital patient tracking and are beginning to be phased out of the latter.

        :P:p:P:p:P:p

        Damn it! Stop trouting me! ;)

        More seriously, that 20 year old server *is* obsolete. Hell, I'd not even keep one of those in my garage.

        And I'm the guy with a Cisco Catalyst 4000 and Dell PowerEdge 2850's in the basement. Running.

        I'm on the local electricity company's Christmas card list, with a special gilded card.

        Note to self: Join the pen test team for the electricity company at the first opportunity. One exponent needs to be deleted from the bill.

    2. Stevie

      Remember the old motto

      Or indeed the new unspoken subtext: It doesn't look like C, I'm not touching it!

      The "new breed" are a bunch of scaredy cats who can only work with one OS without losing it and freak out at the sight of anything they don't recognize.

      Give me an old git who just shrugs and gets on with it every day.

      1. Lodgie

        Re: Remember the old motto

        This has cheered me up no end. Speaking as one who cut his eye teeth on a venerable IBM 360/20 (Google it lads) we were more interested in neatness and reliability than new bright shiny things; this applied across the whole spectrum of IT from Junior Operators to Sys Progs. There was a lot of love, mutual respect and a feeling that we were all blessed and honoured to be working in the world of computing.

        Systems weren't thrown together, they were lovingly crafted, observing carefully defined standards which made support and upgrades a breeze. These were tested, stress tested and parallel run for an age before implementation, and when they were implemented, everyone including the company cat stayed up all night to monitor the first batch run. This doesn't seem to happen now: a dash of alpha and beta testing is done and then the app (in the modern parlance) is slung screaming and writhing at the user, who unknowingly runs the live test. It's all gone to shit.

        I am now old and by the definitions of this article, a wizard and that makes me proud.

        1. Matt Bryant Silver badge
          Thumb Up

          Re: Lodgie Re: Remember the old motto

          "....a dash of alpha and beta testing is done and then the app (in the modern parlance) is slung screaming and writhing at the user who unknowingly runs the live test...." Welcome to the World of "agile"! The real worry is that people have become so blase about "the underlying platform" that they think they can develop a system the same way as they now try to rush through coding, with quite hilarious (and foreseeable) hardware interoperability issues. It's always fun when the truth dawns - whilst modern OSs have great virtualisation tech, not everything will run in a VM the same way as it does on it's own bit of tin!

          1. Wzrd1 Silver badge

            Re: Lodgie Remember the old motto

            "...they can develop a system the same way as they now try to rush through coding..."

            That's *why* we have *BSD.

        2. Wzrd1 Silver badge

          Re: Remember the old motto

          "Systems weren't thrown together, they were lovingly crafted. observing carefully defined standards which made support and upgrades a breeze, these were tested, stress tested and parallel run for an age before implementation and when they were implemented, everyone including the company cat stayed up all night to monitor the first batch run."

          Later, Microsoft came along, ignored bounds testing and the ping of death ensued, revealing deficits.

          Such deficits still run amok today, courtesy largely of Adobe and, a bit less often lately, Java.

          But, we still find the occasional 20 year old bug from hell.

          Carefully defined standards relied upon enforcement of standards. That has proved relatively absent over the years.

          Lovingly crafted and tested by pen test teams is the rule of today and that is always improved upon.

          Or do you honestly want a modern pen test team attack your "lovingly assembled" device or cluster?

      2. Wzrd1 Silver badge

        Re: Remember the old motto

        "Give me an old git who just shrugs and gets on with it every day."

        This BOFH Mk II is available, for the proper price and benefits package. :)

        I've worked on and with equipment that booted with early POST-era self-tests and file system consistency checks that took long enough that I literally glued an antique wind-up clock key onto the server face to relieve the tedium of whoever was the poor soul dispatched to reboot it, right up to working at a Fortune 200 company.

        Where I have been horrified to find a few pet abominations.

        Hmm, what would the BOFH do and how to improve upon the action?

  4. Anonymous Coward
    Anonymous Coward

    It keeps running

    'Cause it would cost too much to replace it.

    I worked for a company that had a couple VAX's and Alpha's still running up until 2012. I think they finally got off of them right after I left because it cost a ton of money to fully replace them. Some places do not have that kind of money, so it continues to live as long as you can find parts on eBay and the like.

    1. This post has been deleted by its author

    2. Anonymous Coward
      Anonymous Coward

      Re: It keeps running

      "a couple VAX's and Alpha's still running up until 2012. I think they finally got off of them right after I left because it cost a ton of money to fully replace them."

      And back in 2012, that may have been the right decision, with HP doing its very best to get rid of VMS. Though even in 2012, most of the hardware-related issues could be avoided by 'virtualisation' (the acceptable name for running VMS on a VAX or Alpha emulator on a more modern box, typically Wintel because that's what the IT bods understand, but not necessarily Wintel, e.g. you can run the free, open source SIMH emulator to emulate a VAX under Android. On a modern phone you can probably still exceed the performance of an early 1980s low-end VAX, e.g. a 730. Entertainment only. For a production application you might want SIMH on a Raspberry Pi 2. Maybe.).
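
      For the curious, here is a minimal sketch of what that looks like in practice, assuming the open-source SIMH "vax" binary (the MicroVAX 3900 simulator) is on the PATH and you already have a disk image carrying a VMS system. The file names and memory size are purely illustrative, and older SIMH builds may also need a boot ROM image loaded.

          # Write a SIMH command file and launch the VAX simulator with it.
          import subprocess
          import textwrap
          from pathlib import Path

          SIMH_CONFIG = textwrap.dedent("""\
              set cpu 64m           ; give the simulated MicroVAX 64 MB of memory
              set rq0 ra92          ; present an RA92-style MSCP disk on controller RQ0
              attach rq0 vms.dsk    ; back it with a local disk image (illustrative name)
              ; attach xq eth0      ; optional: bridge the simulated Ethernet (needs pcap)
              boot cpu
          """)

          Path("vax.ini").write_text(SIMH_CONFIG)
          subprocess.run(["vax", "vax.ini"])   # SIMH runs the named file as its command script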

      'Course the rules may have changed somewhat a few months back when in a surprise announcement the future development and support of VMS moved outside HP (specifically to VMS Software Inc). A port to x86-64 has already been announced.

      "it cost a ton of money to fully replace them"

      It often would. Sometimes the devil you know is the better option. But not always. It helps to be able to make an *informed* decision, rather than just following the IT crowd.

  5. This post has been deleted by its author

    1. Lee D Silver badge

      I'm a firm believer in "If it ain't broke."

      The problem comes from any sort of contingency plans, however. You always have to think "What happens if?". The scope of that varies based on the need and criticality of the system, but that often involves the question "What if everything goes and we have to start again from bare data?" If the only answer is "We have to rebuild it exactly as was", then that's a risk. The chances are you may not be ABLE to do that in the future, even on an existing supported system.

      It's not a lack of skills, necessarily (though I've witnessed that too, and been brought to workplaces where nobody virtualised because nobody knew it was possible!), but a lack of foresight.

      The problem also comes from "migration". That suggests, in computer terminology, a "move". Removing what you have, putting it elsewhere, losing the original. That's a STUPID migration strategy.

      But there's nothing stopping a co-existence between the systems while wrinkles are ironed out. That's the PROPER way to do anything. Run both systems in tandem, actually feed the live data into both even if only one actually gives the outputs to other working systems. Check that both systems handle the same things in the same way within the same timeframe and are equally reliable before you THINK about switching the old one off. In that circumstance, there's no reason to avoid such migration.

      However, virtualisation has its advantages for virtually (sorry!) every user. It's just a matter of getting there. I've yet to see somewhere that wouldn't benefit from virtualisation, to be honest. They may not want it, it may not be justified cost-wise against their existing systems, but it's incredibly hard to find somewhere that wouldn't see the benefits.

      And, sorry, but IT moves fast. Although I'm a stalwart and constantly get ribbed for holding back on new technologies, both personally and professionally, you do have to move on at some point. And it's at that point that you'll wish you'd virtualised years ago. Virtualisation is as old as the hills, itself, precisely because it's such a wonderful and established technology.

      Migrating a large physical system is a scary prospect, but it's like emulation - we do this to preserve the system, to provide rollback, reproducibility, guaranteed knowledge that it's a working system once we get it up, and that we can get it up on any hardware that passes our way. The move from physical to virtual is horrendous, scary, prone to error, etc. But you never hear of people going back from virtual to physical systems, or struggling to move or upgrade their virtual systems around once they are there.

      Virtualisation is a technology that won't die yet, and for good reason. Stay on your old systems as long as you like. They'll work. But run them on modern, supported, warranted hardware that you can get hold of in a jiffy and doesn't cost a fortune to support. And then when you want to upgrade, you can, safe in the knowledge that the old system is only a rollback/checkpoint away - exactly as you'd left it. Hell, even exactly as you left it the last day it was a physical system, if need be.

      1. This post has been deleted by its author

      2. Anonymous Coward
        Anonymous Coward

        Err... can you connect real I/O ports to a virtual machine?

        Virtualisation is good in only some cases where you don't need dedicated I/O but just run office type software. It does not work where cables to the metal are an absolute necessity. In a lot of those cases the legacy hardware was supplied with the machine-tools and as long as those tools run that hardware will be in use and repaired as necessary.

      3. DN4

        > I've yet to see somewhere that wouldn't benefit from virtualisation, to be honest.

        I suppose you don't work with systems that interact with control/regulation systems, measurement instruments, ...

  6. Jay 2
    Unhappy

    To be honest I'd rather not be looking after kit that's ventured firmly into legacy territory. I've done the buying spares off eBay thing and I've also done the "cross fingers and sacrifice chickens" when it comes to even looking at certain boxes. It's not a nice experience when you have to try and get things working again, when you've been pointing out for months/years that the kit really needs to be refreshed/replaced and management are breathing down your neck asking every few minutes when it's going to be up...

    Nowadays I'm using nothing but Linux day-to-day, but every so often a Solaris or HP-UX box resurfaces for which I have to regress to a past life (also occasionally dusty manuals and search engine of choice) in order to do something. This is usually worse if it's not a box you set up in the first place, so you have to go in blind.

  7. Keith Langmead

    Hanging out with the wrong people!

    "People often romanticise legacy IT. Sys Admins fondly look at that old Compaq Netware server"

    What kind of Sys Admins have you been hanging out with? I don't know a single one who looks fondly on old legacy kit; they / I might accept that time / financial / logistical constraints may prevent everything being brought up to date as quickly as we might like, but in an ideal world where money / time / resources were no object I think we'd all prefer to be working with and maintaining up to date systems.

  8. Anonymous Coward
    Anonymous Coward

    What matters most of all...

    ...is having a workable plan for dealing with outages. No plan = commercial losses when something breaks.

    Virtualisation has some useful features to support such a plan, but it's by no means the only way of doing it.

  9. AndrueC Silver badge
    Boffin

    The article mentions the issue of SCO or Novell servers. Virtualisation will not solve that particular problem. They will still be running an ancient OS that no-one 'cept old Bob understands after you've virtualised them. Virtualisation is a hardware solution and as far as ancient servers are concerned it only means you don't have to worry (so much) about sourcing IDE drives or a replacement motherboard that the OS can still run on.

    Although I'd still want long odds on someone being able to virtualise a Novell or SCO server in the first place :)

    1. mr_splodge

      "Although I'd still want long odds on someone being able to virtualise a Novell or SCO server in the first place :)"

      Perfectly doable. I virtualised a SCO OpenServer 5 application back in around 2007, before there was anything like VMware Converter.

      I just did a fresh install of the OS into a blank VM and moved the application, its database and configuration files across. It wasn't too tricky at all really; took a couple of days of hacking around with it, even with no documentation or support.

      The only problem was I could never get the OS licence CALs to work so we were stuck with the 5 you get by default. Thankfully that wasn't an issue because as a legacy system it wasn't used by many people in the business any more.

  10. Anonymous Coward
    Anonymous Coward

    I'd start with avoiding proprietary storage

    I'm the first to agree that migrating away from legacy is a sane thing to do from both a continuity and maintenance perspective, but that requires first of all that you actually CAN.

    If what you're running has gradually become a black box due to provider creativity or by exiting skill sets you really must get your act together and move that gear, because if it dies, it dies for good, and whatever you were processing dies with it.

    I've seen this with accounting facilities, billing and in one case pensions. I ended up helping a company to port its pension management off a mainframe to a mini because the provider of the pension software stopped supporting the mainframe version. I think that if that had not happened the company would still use the mainframe today. The only way to do that, though, was by emulating the mainframe on the mini so all of the mods would still work.

    (an unsupported pensions management system is unusable because company pension management is absolutely buried under legislation, so updates are mandatory just to remain compliant. It's almost as if they do it deliberately)

  11. Ken 16 Silver badge
    Trollface

    Let the man talk

    He's invented a marketing term 'Toxic IT' which can underpin a business case for transferring working IT systems to a virtual environment giving work for infrastructure people, application developers doing compatibility work, hordes of test teams...

    OK, you'll wind up paying more for a System i partition on an AIX server than to keep your AS/400 running, and your old backup tapes won't work, plus you need to upgrade versions of every piece of software, and any savings will be passed to your hosting environment and reflected (slightly) in next year's contract renegotiation, BUT it's in a good cause.

    Next week...LIMPET IT (TM pending) when you've stuck all your old IT in VMs and now they're so non-standard that no cloud provider can host them.

    1. Sooty

      Re: Let the man talk

      You jest, but a scary sounding way to refer to the old stuff you want to replace is quite often the only way you can get the non-technical decision makers to sit up and pay attention to what you are telling them, and get them to open their purse strings to actually do it.

      I might start referring to our old, out of support, software that we are trying to migrate to a newer version of as Toxic.

  12. NotBob
    Coat

    Give a kid a hammer

    If you give a kid a hammer, everything looks like a nail.

    Seems like someone just gave the author some virtualization software.

  13. John Klos

    This reads like a sales pitch

    While it's true that saving power and using newer hardware are generally good things, virtualization for the sake of virtualization misses the point. It attempts to move the risk of hardware failure across more cheap, replaceable machines in lieu of caring about having and maintaining good hardware. Kids these days don't remember the days when we bought good hardware that could literally run for decades without problems (VAXen, for instance, just don't seem to die on their own), but there are definitely situations where fewer reliable systems are much more appropriate than a boatload of cheap x86 machines. While I don't disagree with this article generally, moving to modern can lead, and often has led, to a dead end.

    Many Windows-centric IT staff have moved from legacy Unix and VMS systems to Windows. Now where are they? They have an OS which can't be reinstalled without licensing issues, applications which must be installed from their original installation media and can't be encapsulated, configurations whose hardware has to stay precisely the same else the house of cards will come tumbling down. This is often actually WORSE than how things were with the legacy systems, and this article is a good answer for those dead-end Windows "solutions". But not learning a lesson from doing it incorrectly is almost worse than doing it incorrectly in the first place. Virtualizing Windows to deal with some of these issues belies the point that we (meaning IT people) shouldn't be heading down these dead ends in the first place.

  14. User McUser

    Actually, DON'T Virtualize it.

    Virtualizing some ancient machine will only bring forward all the stupid software issues that exist in the old version of whatever OS it was running, meaning you'll still need the longest-toothed IT gal/guy to stay on and manage the damned thing.

    Better instead to migrate the service to a new platform than to keep dragging along 20 years of legacy BS and unpatched vulnerabilities.

    1. Number6

      Re: Actually, DON'T Virtualize it.

      I still run an OS/2 VM because I have yet to find time to re-write a bit of software that runs on it. It's on the to-do list but so far things are working OK and there's no urgency, apart from thinking of some new features I'd like to add to it.

  15. Anonymous Coward
    Anonymous Coward

    So much bollocks spouted in the comments

    The article makes a good point. Obsolete hardware running obsolete operating systems, protected from any kind of progress and renewal by the protectionist rackets run by the doddery old grey beards, is a bloody liability.

    Give me a well built virtualised platform, underpinned by enterprise class storage, network and server, on a supported o/s any day. Bake in a good backup solution, robust software and configuration management tooling, plus a degree of automation, and you have a platform that will be easy to support with skill sets available to do so.

    Take your ancient crap and shove it up your arse.

    1. Mark 85

      Re: So much bollocks spouted in the comments

      That's all well and good for you in the here and now. Get back to us in 10 years, after time after time of your asking for upgrades and modern systems and equipment which management turns down as "it costs too much" or "what we have is working fine". You might want the new and shiny and modern stuff, but you won't have it and will be dealing with the crap just like the rest of us. Reality is a bitch.

    2. Anonymous Coward
      Anonymous Coward

      Re: So much bollocks spouted in the comments

      AC, I assume you have several hundred million euros to spare and can afford to shut the factory down for over 12 months while everything is ripped out and all mew machinery is installed and the workers are retrained to use the new equipment. Or is it that you are still wet behind the ears and have never been in a real-world manufacturing situation?

      What you are talking about will work most of the time in an office type of situation where there are regular upgrade cycles, it will not work out on the factory floor where machine-tools are expected to have a minimum life of 30 years and that includes all the hardware that controls them.

      If NASA followed your short sighted plan then there would be no probes leaving the solar system and not many satellites in orbit still working.

      1. Disko
        Thumb Up

        ....all mew machinery

        Just upgrade all legacy kit with Hello Kitty stickers and your loved pets are as good as mew =^..^=

    3. Solmyr ibn Wali Barad

      Re: So much bollocks spouted in the comments

      "Give me a well built virtualised platform"

      Hah. Dream on.

      While we're at it, can we finally have those flying cars and frikkin-sharks-with-frikkin-lasers? Plus some nice unicorns for the tenderer species in the profession.

      And, if it's not too much trouble, a new box of bingo cards, because this topic has been quite taxing on the supplies. 15 instances of the word "legacy" in a short article...there should be a law against that, or at least a fair warning for the unsuspecting readers.

  16. Imprecator

    That was my Datacenter 5 years ago. I took a whole bunch of old Peoplesoft ERP servers (some of them running Peopletools 7.5) and virtualized them using VMWare ESX.

    I still got a couple of old "pets" around, simply because I have spare parts. Part of the project too, to minimize costs of course.

    And no, I don't think of them fondly; the company is simply too cheap to upgrade. When I arrived, some of the servers were over 10 years old. I strongly suspect that when I leave, my now 5 year old infrastructure will become another "pet" for the next poor sod that has to work on them.

  17. Frank Rysanek

    Sustainable push forward

    So you've virtualized all your legacy boxes. You haven't just imaged the old versions of Windows, Netware or whatever have you - you've even installed modern Windows versions in the VM partitions, reinstalled/upgraded/replaced the apps etc. Instead of a 42U rack cabinet, you now have a pair of modern quad Xeon servers (because if it was only one server, that would be a single point of failure, right?). Now finally you can juggle the system images at a whim and Ghost has become a fading memory.

    Oh wait - for a proper virty orgasm, you need an external storage box to centralize your storage, of system images and data volumes. Heheh - or two storage units, to avoid the single point of failure... because disk drives, RAID controllers and power supplies are all eventually perishable. Fortunately the storage attachment technology doesn't matter much (SAS/FC/IB/PCI-e/iSCSI?) as long as you have a way of getting your data out of the old storage box a couple years down the road. To the hypervisor, the guest images are just files - so you only need to have a way of moving the files around (actually forward).

    Next question... will your system images of today be compatible with a future hypervisor release 5 years down the road? What about 10 years? Will your colleagues 10 years down the road be able to maintain that old hypervisor, to restore the host OS from backups onto bare metal? Ahh yes - you can upgrade the host/hypervisor OS regularly / incrementally through the years. If you have a myriad OS images with non-trivial virtual network interconnects between them (just a LAN and DMZ with some servers in each, plus a firewall in another partition) - will your colleagues 10 years down the road be able to find their way around this? Yes of course - it's a matter of proper documentation, and passing the wisdom on...

    Will the virtualization make it any easier for your successors? Isn't it a matter of replacing one problem (supporting old OS on old bare metal) with the same problem in a more layered and obfuscated reincarnation? (supporting your host OS / hypervisor on the ageing bare metal, and supporting the old guest VM's in potentially new host OS / hypervisor releases?)

    To me, the article is pretty disturbing. I do feel the old git taking over in my veins...

    1. Anonymous Coward
      Anonymous Coward

      Re: Sustainable push forward

      Will the VMs still work in future on new hypervisors and hardware? The answer is an obvious Yes. Having worked with every flavour of ESX since its release, and numerous server hardware platforms, networks and storage systems as they have been released - in every case the VMs kept on working, with only the occasional reboot as the latest tools (drivers) were automatically installed.

      All this continues to reinforce the evidence that the doddery old greybeards have no clue and have failed to keep pace with tech over the last 10 years.

      1. Frank Rysanek

        Re: Sustainable push forward

        That upvote is from me, thanks for your response.

        In my case, it's indeed a matter of being somewhat inertial and lazy. The scale is relatively small, and hasn't pushed me in the right way very much. I'm not a full time admin, and the 4 servers that we have in the cabinet (2x Windows, 2x Linux) are not much of a problem to cope with. An upcoming migration of an important app onto new Windows (2003 end of support) will raise that to 6, temporarily (read: until everybody moves their small internal projects to the new infrastructure, read: possibly forever). So far I've been approaching all this by keeping the hardware uniform and keeping the Linux distros hardware-agnostic. I'm doing any revamps of the Windows hardware in "waves", to save a bit on the spare parts. We're a hardware shop ourselves, so I always have some old stuff lying around - all I have to hoard is an extra motherboard in every wave. There's a server or two elsewhere in the building - machines that I prefer to have physically separate from the main cabinet.

        Other than the small scale, I'm a classic case for virtualization - I have Windows and Linux, and I'm too young to be a conservative old fart (which is how I actually feel in many respects :-) = I hardly have an excuse for my laziness...

        Regarding potential virtualization, one possible consideration is the organizational environment. I'm ageing along with a stable gang of merry colleagues who are less conservative than I am, but more in the way of "if it's not click'n'done, like Apple, there's something wrong with it". On the job they're all well versed in managing Windows across different pieces of physical hardware, and are even ahead of me in terms of virtualization on a small scale (for testing purposes of Windows Embedded images etc.) but - they're not very good at debugging firewalls and heterogeneous tech. I'm wondering what an additional layer of indirection would present to them, if I get hit by a car someday... it's indeed a matter of documentation and deliberate internal sharing of knowledge. Or outsourcing the whole darn internal IT thing (in a house full of computer geeks).

        After your comments, my impression of virtualization boils down to approximately "within a time span of several years, it will decouple your OS versions from the physical hardware and its failures, you will only have one physical machine to cater for, and yours will be the choice when to migrate the VM's to less archaic OS versions (which you have to do anyway, no escaping that) = at a time when it suits you."

      2. John H Woods Silver badge

        Re: Sustainable push forward

        "...doddery old greybeards..." --- AC

        ... is that twice now you have trotted out that offensive ageist cliché? It's not making you sound as clever as you think it does, possibly quite the opposite. I've met some very bright people in IT, not just in the group 25 years younger than me, but up to 25 years older, too. As you seem to think the solution to all legacy IT is just to spend umpty million I cannot believe you have any real world experience in the industry, otherwise you'd know that all kit is legacy, it's just a matter of degree --- and that management have other priorities than making sure you are happy with your kit.

        ** And, not that it matters, but some of the latter camp are so far from "doddery" I wouldn't put any money on you staying upright for 5 seconds if you were brave/stupid enough to say it to their faces outside of the office.

  18. Rainer

    How do you virtualize old hardware?

    First of all, modern versions of vSphere only really support a limited number of old OS versions (with maybe the exception of Windows).

    Take the case of FreeBSD. Only the most recent versions support VMware's vmx network interface. Previously, you needed the emulated intel NIC. But older versions of FreeBSD didn't even have a driver for that.

    (Though, in the case of FreeBSD one could just install 10.1 and the compat5 package and just use all the old binaries - but how many people actually know that?)

    Then, the p2v-tools don't support anything but Windows and Linux (with a couple of restrictions, like that you can't virtualize a Linux software-raid. Or couldn't last time we tried a couple of years ago).

    And who wants to virtualize old software anyway?

    Of course, there's also the customer.

    We've recently tried to persuade a customer (who's really a department of a much larger customer with _very_ deep pockets) to virtualize his stack of DL380G5 servers (purchased in early 2008).

    But due to some overlapping plans with his software-refreshment cycle for the 3rd-party app they're actually running, they've now renewed their lease of these old machines. But at least, I'm re-installing them with FreeBSD 10.1 (I think I previously upgraded them from 6 to 7 and now 8). And at least, they can run 64bit software.

    And we've only got to make them last for two more years.

    Hoarding spares....

    Then, there's the slew of customers with PHP5.3 apps (and some PHP 5.2 and some PHP 4, incredibly) where migrating the app to PHP5.5 would mean a complete rewrite (usually typo3-based websites with custom extensions). Because the customer often can't do the rewrite himself, he has to pay an agency. Obviously, the money for that is sort-of not coming forward. Sometimes, the customer is the web-agency, sometimes the actual customer.

    A while ago, I migrated the data off a Solaris 10 file and MySQL server to a FreeBSD 10 system. That Solaris 10 box had close to 2000 days uptime. 2000 days without patches.

    But patches with Solaris were always a bit of a hit-and-miss and after Oracle bought Sun, you couldn't download them for free anymore anyway...

    Software doesn't age, but hardware does.

    1. Anonymous Coward
      Anonymous Coward

      Re: How do you virtualize old hardware?

      "Take the case of FreeBSD. Only the most recent versions support VMware's vmx network interface. Previously, you needed the emulated intel NIC. But older versions of FreeBSD didn't even have a driver for that."

      Have you actually found that to be a problem? A couple of years ago I ran some tests through a pfSense 2.0 system that had e1000 NICs in it. That would be a pretty old FreeBSD. This was on an ESXi 5.1 three node cluster of Dell PE 610s with Dell PC 62xx switches and quite a lot of other stuff going on.

      I got quite close to wirespeed routing for 1 gigabit.

      The hypervisor aware drivers are handy, depending on your workload. It allows the HV and VMs to cooperate rather than the HV enforcing.

      I do call bollocks on this though: "But older versions of FreeBSD didn't even have a driver for that." How old? em has been around for quite a while ...

      1. Rainer

        Re: How do you virtualize old hardware?

        You're right:

        " The em device driver first appeared in FreeBSD 4.4."

        https://www.freebsd.org/cgi/man.cgi?query=em&apropos=0&sektion=0&manpath=FreeBSD+6.0-RELEASE&arch=default&format=html

        But it took a while before it was as great a driver as it is now.

        The 4.x and 5.x days are a bit hazy in my memory - and I didn't have access to Gbit technology back then (IIRC).

        We do have one or two servers with FreeBSD 5.3 or 5.4. It's supposed to be replaced any time soon...

        I didn't set it up, though. Almost all the servers I've set up over the years run a supported version of FreeBSD.

  19. Charles Smith

    These new fangled VMs

    Heh heh, I never did trust them new fangled VM machines when I was running them on the old IBM 360 type machines in the 1980s. Sure, it is wonderful to have new shiny toys in your play room, provided you can convince the business owners to provide the funds to convert the old systems that are running "just fine" at the moment.

    Emulation of hardware/software doesn't always provide a solution. I had to deal with a situation 15 years ago where a section of code written in Wang COBOL was resolutely single threading on IBM mainframes and refusing to run faster regardless of the size/number of processors thrown at the task. It was an absolute bottleneck in terms of transaction processing throughput. Original documentation of the business logic encoded in the section of code had long been lost in multiple company mergers and "rationalisations".

    The problem is only going to get worse as program developers move further away from the hardware, protected by increasingly complex layers of middleware software.

  20. Disko
    Trollface

    Because yeah, everybody knows.

    Management at corporations around the world always listens carefully to IT people raising pragmatic arguments about the cost of maintaining legacy mainframes and the like, and how rebuilding essential systems is really something we need to do for a myriad of reasons that have nothing to do with disrupting business processes or spending tons of money, and they all understand and agree fully that this migration is necessary for future compatibility, they know full well that virtualising systems does not make them disappear into the clouds, and writing off hardware that was once the very core of the company, and cost the kind of money that you could still buy a rather decent house for... managers really love to jump headfirst into all of this - it's just another business challenge, and never in the first place did they listen to the pointy haired salesperson in a suit telling them that this very system, that you now so disrespectfully call legacy, was cutting edge technology and will "give the company a solid advantage over the competition" even if everybody has had to change the way they work in more pointless ways than they care to remember to accommodate the beast. Management didn't buy a system in the first place, they bought a sales pitch, a bunch of lunches, a whole lot of sucking up. The big name badge on the big bad box in the basement is just there to show the board they are doing things right, buying Big Iron. Calling it legacy doesn't make it sound like you think the whole company is actually over the hill... No manager would ever shy away from changing something they barely understand, to something that they understand even less, nor care about what any of it really does in the first place: they are intimately familiar with the massive talent of their inhouse IT people and trust them to know what the company needs. We can all hear them chant: forget paying the same old familiar power bill once more, nobody is asking why we pay the power bill anyway: let's have a migration adventure and see where it gets us! The board will certainly approve of all the spending on racks with anonymous kit and developers and staff and licenses and new storage arrays and cloud providers to create something that does exactly the same as that big box in the basement has done for the past two decades.

  21. OzBob

    The problem with legacy hardware

    is generally that the software service it provides is either beyond the capacity of current programmers to reproduce, or too expensive to warrant the effort to reproduce. (My previous employer ran a risk management tool written in COBOL, ported multiple times, which ended up running on a huge HP-UX system.)

    There used to be a market for migration tools in the late 90s / early noughties for certain types of OS or Application, but they were riddled with problems and generally crap. Now, budgets have dried up and enthusiasm for the risk of change has dropped.

    (Right now at work, I am migrating an application from an older Linux box which is exposed to Heartbleed, to a newer Linux version that has a patch out. It took news headlines to get the customer to pull finger and pay the money to move. And there is a good chance that the customer will have no idea how to fix the interfaces that break in this move.)

  22. PJF

    IRQ asmnts?

    How do you tell a V.M. a specific IRQ to run a specific piece? Remember PnP (Plug-N-PRAY)?!

    You know, when you had to physically move jumpers 'round to assign IRQs? Making sure that nothing else is going to interfere, so at least you'll have video, and a kb to work out the problems...

    1. Frank Rysanek

      Re: IRQ asmnts?

      I only know the hardware side of this, never actually tried to use them in a host/hypervisor... so I cannot tell you how it's done.

      The "virtualization" support in hardware comes in several degrees.

      1) VT-x - this facilitates the virtualization of the CPU core. I understand that the host/hypervisor must provide the "virtual" devices (storage, network, KVM) via its own guest-side device drivers (decoupled from actual hardware). In other words, the hypervisor SW must mediate all I/O to the guests, the guest OS merely lives happily under the impression of having a dedicated CPU.

      2) VT-d - essentially this allows you to assign/dedicate a physical PCI device (or even PCI dev function) to a particular VM guest instance. The secret sauce seems to have several ingredients, and IRQ's are just one part (the easiest one, I would say). I've recently found some notes on this (by no means exhaustive) in the Intel 7-series PCH datasheet and in the Intel Haswell-U SoC datasheet (vol. 1). Interestingly, each doc explains it in a slightly different way. I recall reading about the possibility to invoke a selective reset of a single physical PCI device (actually a PCI dev function), about delivering interrupts to a particular VM, about making DMA (several flavours) virtualization-aware (compliant) - and I must've forgotten a few more.

      Only some selected on-chip peripherals lend themselves to VT-d (they're listed in the chipset datasheet).

      3) SR-IOV - allows you to "slice" a physical device (peripheral) into multiple "logical partitions", where each "partition" appears as a dedicated physical device to its own assigned VM instance. It's like VLANs on PCI-e, where SR-IOV aware peripherals (NICs, RAID controllers) know how to work with a "VLAN trunk". SR-IOV can not only cater for multiple VM/OS instances through a single PCI-e root complex, it can actually cater for multiple PCI root complexes as well - allowing multiple physical host machines to share a PCI-e NIC or RAID, for instance (or a shelf of legacy PCI/PCI-e slots).

      VT-x has been there for ages, in pretty much any modern CPU.

      VT-d has been a somewhat exclusive feature, but becoming more omnipresent with newer generations of CPU's and chipsets.

      SR-IOV needs VT-d in the host CPU and chipset, and most importantly, the peripheral must be capable of these "multiple personalities". Only a few select PCI-e peripherals are capable of SR-IOV. Some NICs by Intel, for instance. Likely also FC and IB HBAs. As for the multi-root-complex capability, this requires an external PCI-e switch (chip in a box) that connects to multiple host machines via native PCI-e. Or, the multi-root switch can be integrated in the backplane of a blade chassis. A few years ago, multi-root PCI-e for SR-IOV seemed to be all the rage. I recently tried to google for some products, and it doesn't seem to be so much in vogue anymore - or maybe it's just so obvious (implicit in some products) that it doesn't make headlines anymore...

      As for IRQ's... IRQ's alone are nowadays message-signaled for the most part (for most of the chipset-integrated peripherals). PCI-e devices are per definition MSI compliant (MSI = one ISR per device) and most of them actually use MSI-X, where one device can actually trigger several interrupt vectors (ISR's), such as "one for RX, one for TX, and one global" with modern Intel NICs. Even before PCI-e MSI's, the IO(x)APIC present in most machines since maybe the Pentium 4 can route any IRQ line to any CPU core (any CPU core's local APIC). Considering all this, I'm wondering what the problem is, to assign a particular IRQ to a particular CPU core (running a VM instance). Perhaps the IRQ's are the least problem. Perhaps the difference with VT-d is that the mechanism is more opaque/impenetrable to the guest OS (the guest OS has less chance of glimpsing the host machine's full physical setup and maybe tampering with it). That's my personal impression.

      IRQs on PCI are, by definition, PnP (except for some *very* exotic exceptions, where you can specify in the BIOS which GSI input on the chipset triggers which interrupt number in the IO-APIC, or where you can jumper a PCI-104 board to trigger a PCI interrupt line of your own choice). In a virtualized setup, however, the IRQ routing must follow the admin-configured setup of "which VM owns which PCI device". PnP with human assistance, I would say.
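
      On a Linux/KVM host that admin-configured ownership step usually means rebinding the device to vfio-pci before the hypervisor hands it to the guest. A sketch under those assumptions (placeholder PCI address, vfio-pci module loaded, root required):

```python
# Sketch (assumes a Linux host with the vfio-pci module loaded and root
# privileges; the PCI address is a placeholder). This is the "which VM owns
# which PCI device" step: detach the device from its host driver and hand it
# to vfio-pci so a hypervisor such as QEMU/KVM can assign it to a guest.
import os

DEV = "0000:03:00.0"                       # hypothetical device to pass through
SYSFS = f"/sys/bus/pci/devices/{DEV}"

def rebind_to_vfio():
    driver_link = os.path.join(SYSFS, "driver")
    if os.path.islink(driver_link):        # unbind from the current host driver, if any
        with open(os.path.join(driver_link, "unbind"), "w") as f:
            f.write(DEV)
    with open(os.path.join(SYSFS, "driver_override"), "w") as f:
        f.write("vfio-pci")                # make the next probe pick vfio-pci
    with open("/sys/bus/pci/drivers_probe", "w") as f:
        f.write(DEV)                       # ask the kernel to re-probe the device

if __name__ == "__main__":
    rebind_to_vfio()
```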

  23. Paul Hovnanian Silver badge
    Devil

    It all made sense

    The arguments about legacy hardware vs keeping apps ported to current platforms, consolidating lots of single purpose hosts into their own VMs, reducing the physical IT footprint and utility bill. All good arguments.

    But then, in the last sentence, they said 'cloud'. And I sensed the presence of some cloud service sales rep whispering in my CIO's ear.

    1. Anonymous Coward
      Anonymous Coward

      Re: It all made sense

      "The arguments about legacy hardware vs keeping apps ported to current platforms, consolidating lots of single purpose hosts into their own VMs, reducing the physical IT footprint and utility bill. All good arguments."

      And all ignoring two factors that virtualization can't fix: you're still managing the same number of independent software setups, and you're still paying the licences for the same number of independent software setups. Not to mention the licensing cost of virtualisation.

      1. Paul Hovnanian Silver badge

        Re: It all made sense

        "And all ignoring two factors that virtualization can't fix:"

        In these cases, virtualization is like chicken soup. Will it help? It couldn't hurt.

        1. Anonymous Coward
          Anonymous Coward

          Re: It all made sense

          "viralization is like chicken soup. Will it help? It couldn't hurt."

          Whether it hurts or not depends on whether it's used as part of a genuine solution (harmless?) or as a means of avoiding a genuine solution (not harmless).

  24. ecofeco Silver badge

    Old crap is more like it

    At the last place I worked, I was assigned to help asset management clear out the store room. It was the size of a 10-car garage and literally filled to the rafters.

    I estimated at least $2 million in 5+ year-old gear was finally shipped out. That was just the first round.

    Almost every place I've worked has this same problem. Why is that?

    1. Paul Hovnanian Silver badge

      Re: Old crap is more like it

      "Why is that?"

      Because management funds the development and deployment of an app. Once that's done, the funds dry up. And it's the IT department's responsibility to keep the disks spinning and the hosts up. But nothing more.

      Try going to management to request ongoing funding to keep applications current and ported to the latest platforms and see how far you get. IT management 'heroes' are made when these legacy systems finally break down and the spare-parts hoard for their servers runs out. The person who spearheads your company's program to finally get off IE6 will probably become a potential CIO candidate. If the grunts in IT had managed to keep it current with everything up through Chrome, nobody would have noticed.

      1. ecofeco Silver badge

        Re: Old crap is more like it

        You nailed it, Paul. (It was a rhetorical question, but that was a good explanation.)

        Other reasons:

        - the asset manager is a slacker

        - their supervisor is a slacker

        - the actual owners of the gear don't want to be perceived as throwing the equipment away.

        - the accountants are trying to maximize the depreciation writeoffs as long as possible

        - the board can't agree on who should get the salvage rights (friend? relative? shell company?)

  25. Anonymous Coward
    Anonymous Coward

    From the teat of marketing...

    While many of the points are completely valid, this smacks strongly of someone who's been on a sales training course for one of the major server vendors.

  26. Andrew Punch

    Managemint

    I am a software engineer and have seen the problem first hand.

    Me> The software on this box is out of date and not supported

    PHB> We must have the latest and supported and ra ra ra

    Me> We need three weeks of development time and a couple of days of sysadmin time

    PHB> Oh don't do that sales wants X

    ===

    3am

    ===

    PHB> OMFGBBQ the shit has hit the fan

    Me> Yes that is because we are running out of date software. Now our customers are pissed

    PHB> No time for that, just keep it running

  27. Glenn 6

    I think the author may be talking about hosted VM solutions like Azure or Amazon Web Services (AWS).

    To make all that fancy "click and magic" work there's a lot going on that still requires bare-metal servers, infrastructure, and us pesky IT people he wants to get rid of.

    There is also a downside to virtualization he's not touching on, of course, as he's clearly biased against IT people. My experience is as follows:

    1) Lack of control over your environment at the low level means sudden, unexpected downtime from the VM provider for one reason or another. Be it a failure, or "Hey, I need to move you to another hypervisor for the 3rd time this month".

    2) At the bottom of all this virtualization there still lie bare-metal servers and equipment that we need to maintain, plus the underlying software that makes a VM a VM. As much as he'd love to eliminate as many IT jobs as possible and replace them with apparently cheap, underskilled basic operators, there is still a need for the expensive, highly-skilled, highly-experienced old-time sysadmin on the backend who understands everything from hardware to IP networks.

    3) Single point of failure. One server hosting 30 VMs goes poof, you lose 30 VMs. That is, of course, unless you've spent twice the money on a duplicate rig and failovers.

  28. jcitron

    Sometimes you just can't replace anything...

    In some cases there is no option but to continue to use the legacy equipment, whether it's due to the proprietary software running on the hardware or because the cost to upgrade is too high.

    So you hire someone who knows how to run old stuff. What's wrong with that? There are plenty of us older IT guys who can do that. We're not just button-clickers who play with virtual environments. We can do that too, so it's a win-win situation for the company. Besides, we know what computers really look like inside and can usually troubleshoot problems before calling in for repairs.

    I worked in such an environment for 11 years. Our MSSQL database ran on a 1999 Compaq (not HP) ProLiant 5000n. The company purchased it new along with the database. The database vendor had long since disappeared, but the company that purchased it could not afford to upgrade. To put it mildly, it was more important to meet salaries and pay taxes than it was to upgrade the server.

    So there was our computer room, which of course you'd cringe at if you saw it. We had seven NT 4.0 SP6a servers running. They were backed up daily, 7 days a week. During the 11 years I was with the company, I lost two power supplies and two hard drives on two different systems. During this period, the servers were shut down and rebooted monthly, kept cool, drivers updated periodically, patches applied, and antivirus packages kept up-to-date. We never had a virus, and we never had a BSOD.

    Sure they were old boxes, not those fancy things up in the cloud. They worked and worked hard until the company closed.

    I'm a firm believer in KISS. Sure, all this new stuff is great, but there are cases where disturbing a working environment costs more than it's worth.
