There is no statement quite so eloquent, nor action so elegant,
as a smack in the mouth!
If you meet him, Bill, let me know; I'll back you up.
And that's one of the easier chores our reader found himself faced with in a new temp job. Most weekends, our On-Call feature looks at the odd situations readers find themselves in when called to do something on a client site or in the dead of night. This week we're making an exception for reader “Bill”, who rates himself as “ …
I've been in this situation several times, and in each case the predecessor was not a sysadmin. He was a member of staff with some technical literacy, who started with the system when it was around 5 workstations using workgroup sharing. He looked after it while doing his normal day-to-day job. Over the years the number of desktops increased, and the guy, while still doing his day job, figured out how to install Windows Server and Exchange; it probably took him a few attempts. His day-to-day job didn't relent, and he found himself working later and later into the evening. As the number of users increased he became the help desk for every jammed printer and power cycle. Fast forward a couple of years: his normal work is suffering, he no longer has the time for IT, and he's mentally exhausted from the stress of keeping a multi-million-pound company running. Then he leaves. The company takes on a new person to do his non-IT job, and we get called in to look at the IT.
The owners of the company probably have no idea what he was doing to keep the system going, after all he'd just always done it for years, they never needed to spend money.
"The owners of the company probably have no idea what he was doing to keep the system going, after all he'd just always done it for years, they never needed to spend money."
That's exactly the problem I see repeatedly, along with the mentality that everything can be done on a desktop windows box.
"Idiot boy" quite likely burned out and never learned decent practice, as this wasn't his area, nor was he paid enough to do it properly. At some point he either signs off sick, or says "fuck it" and goes on a richly deserved vacation.
This happens with volunteer stuff too. At some point people expect that things get done and can be pretty rude when they're not. At that point the volunteer might well say "sod this" and walk away from it.
"This happens with volunteer stuff too. At some point people expect that things get done and can be pretty rude when they're not. At that point the volunteer might well say "sod this" and walk away from it."
That ties in with an earlier discussion where I stated I would never take on work for a charity. This is basically reason 2, after "unlikely to ever get paid": you tend to inherit a mess which is a bit like a house of cards or an erect member: it only stays up as long as you don't f*ck with it (my apologies for being crude here, but I suspect that anyone who has ever been near such a situation must be a saint not to utter a few words that would get bleeped out on daytime TV).
The problem is then that YOU were the last one to touch it, so it's your fault now and you end up fighting to get this mess into some shape before you depart - without any expectation of payment, or appreciation of the valiant battle you fought.
60 users? It would have been easier to make a new, up-to-date golden image and redeploy via FOG. Office updates alone will swamp a PC for an hour or so "in the background", never mind service packs (four years of updates will probably include one). WSUS for 60 users isn't that network-intensive; same for SQL, if the box has enough RAM and disk space. Exchange really needs its own server.
I inherited a network 10 years ago running a pair of 2k3 servers, both P4 Xeons (PowerEdge 2800s) with 2GB of RAM each. One ran IIS, SQL and profiles (it was a domain controller); the other had Exchange 2003 and was a domain controller AND the enterprise cert server, plus shared drives.
No central AV, each of the 200 desktops updated individually.
My god, you have never experienced pain until you have seen Exchange trying to run on 2GB of RAM. (It was a school with 80 staff and 600 pupils.) Apparently they had spent 100k upgrading the network three years previously. I wanted to see receipts for what had been spent: the majority went on new cabling (from Cat5e to Cat5e) and new network cabinets, oh, and a silly great massively overprovisioned tape drive for each server.
At least the 100GB RAID 5 server drives were backed up onto tape each night.
That, and also trusting Windows to free up space automagically was probably not the brightest move, especially in that case. On old boxen (even relatively well-managed ones), this leads to disaster more often than not.
I can understand the state of mind that led to the decision, though.
Some IT services companies aren't any better. I took over at one company where the IT company had never patched the Windows XP machines and had never run virus scans on any of the PCs. Better still, my employer had Exchange Server, with every account externally available via Outlook Web Access. The problem? The services company had reset all of the passwords for every user and stopped them from setting their own, so every user in the company had the same password: 12345!
The first thing I did was disable OWA, set all passwords to require a change at next logon, and only turn OWA back on as users requested it. Out of 200 employees, only two users actually knew they had access.
The reason they had never run AV? The Windows XP machines still had SP1 and 256MB RAM. When I turned on local scanning, the PCs were unusable for nearly two days, scanning nearly empty 40GB drives! I managed to get most of the machines upgraded to 2GB RAM and patched.
I also had an IT department who thought Linux servers didn't need patching because, well, Linux. I mean SUSE Enterprise from 2001 isn't going to be vulnerable to anything, is it?
> Some IT services companies aren't any better
But you can only say that if you know the whole story.
I work for what some future person might call "one of those IT services companies that's no better" - but with some customers you just can't do things right. They won't pay you for your time to do stuff (especially at out-of-hours rates), they won't permit the downtime to do it, and you send emails to the director responsible pointing out that they've had no backup for months, that servers need patching, etc, etc, etc, and it still has no effect.
With some customers you get used to adding a footnote to all emails along the lines of "and we take no responsibility for any data loss or disruption". So, is it still the IT services company's fault that the server hasn't been patched for 5 years?
Anon for obvious reasons - we have a customer that has begrudgingly allowed some patches to be installed and a backup taken. But only because the age of the SQL server broke something in another package - one with an "if you don't upgrade, we won't support you" clause in the agreement, and that package effectively is their business (a business-sector-specific system - without it and the data it holds, they are gone). And the backup only got done because we point-blank refused to apply any patches without having a full backup first.
But that's still only the patches needed to fix the SQL problem - not any other SQL or OS patches!
And if that sounds like a bit of whining about getting the blame for other people's faults - then that's how it should sound.
Chances are he had a motivation for his final conduct. To top it all off, after years of asking, the company refused to reclassify him as a sysadmin and give him the better salary, and instead gave him grief for his underperforming original job. So he was expected to be a sysadmin, and do another job, at a lower pay rate. I can't say I blame him, as I have seen a similar situation. It all began with the best intentions, but over the years he became taken for granted and marginalized in the company structure.
You were probably the first qualified IT person to ever enter that room. No doubt the company only hired you after failing to find another sucker to take on the task in house.
He left our place (a school district) under a cloud; within a week the historical grades for all of our high school students disappeared in a puff of virtual smoke. Backups were consulted and found to have last been completed nine months earlier. Antivirus software had been subscribed to on a site-licence basis and left uninstalled for 18 months, and a full licence audit found massive discrepancies between the licences held and the number of installations. All in all, a big mess. Hardware-wise it was just as bad; it took the replacement nearly two years to figure it all out and get it into a fairly decent working state.
Sounds like a temp job I had at a non-profit school district 3-4 years ago (which got me to quit IT altogether).
Their prior IT director (a guy who barely knew how to install Windows) quit, and someone I know got the job, so he called me up saying he needed help fast. I decided to help him.
I get on site at the first school... network cables are thrown on the floor where students walk on them and roll their chairs over them. One room actually had a friggin hub flung on the floor, all bent up with footprints on it, and that was the best part of their network design...
They were wondering why their server was acting up and randomly failing, and why the backup system wasn't functioning. The backup system was some proprietary automatic tape library whose manufacturer had disappeared, and the last known software for it was for Windows 95... Best part: this was all stored in the friggin JANITOR'S CLOSET with absolutely zero ventilation. When I opened the door I felt like I was in a sauna that reeked of dirty mop water. No reason at all for a server to randomly shut down and fail.
Then the main organization's office server completely failed (its power line got hit by lightning two days prior and took the server with it)... This one had a working backup system, though. Problem is, the only thing set to back up for five years was the backup software's directory... Yup, the last admin/IT director did every-other-day backups of the backup software's directory only. They got mad at me that I couldn't restore five years' worth of financial data...
Next school... This one actually had a decent layout, as it was done by an IT company. The last IT director had signed a contract with a tech company to have a T1 installed. They needed it running for when school started; problem is, the last IT director signed the contract for it to be ready two months AFTER school started. Guess who was at fault? Yup, me, because some idiot signed a contract four months before I got hired... It also didn't help that the principal was a complete fuckin idiot who seemed to enjoy making my life hard. He complained he needed a bigger monitor (he had an 18-inch), so we ordered him the next size up from the distributor we had credit with, which was a 22-inch. How many people would complain about getting a 22-inch monitor? Instead of telling IT anything, he went and complained straight to the head of the organization, claiming we weren't doing our work. I wanted to beat this guy upside the head so hard it wasn't funny.
Next school... The server room was a coat closet with zero ventilation in the main office. The difference is this school had a principal smart enough to know computers don't like heat, so he had a door stop holding the "server room's" door open. This place didn't cause me much issue until the projector they'd never tested didn't work. The last IT director had had it installed by a company, BUT had the wrong cable run for it... He had RGB run when the projector they had only had a VGA port. Had to buy a long-ass VGA cable and re-run it. Still nowhere near the pain of the other buildings I mentioned, as, like I said, the principal wasn't completely technologically clueless and understood it takes more than 25 seconds to get stuff working.
The last school was built by me and my buddy from the ground up. We picked servers and routers and did everything to get it running. The only issue I had was that some software they demanded for their library, which they wanted on the server, wouldn't work, so I grabbed a spare workstation, installed the stuff on that, and put the machine in the bottom of the server cabinet (yes, we bought a real cabinet, unlike the others) in a climate-controlled room. I also made a script to auto-backup all its data to the main server so it would get onto the tape nightly. I liked working in this building, as the only issues were teachers who couldn't remember their passwords (which were all 12345...).
Outside of the last school, which I built, the workstations were all donations from companies, so they were old, beat-up machines. I would have to respond to an inquiry from the head of the place EVERY single time I needed ANY part, as the IT dept's budget was blown. Thing is, it wasn't blown by us: the secretary and her friends decided they all needed $5k laptops and took the cash for them out of our budget.
Then after we finished, they demoted my buddy without notice and hired some other dude to be IT director. We both quit the next day, as I only did the job to help him, and they kept blowing off hiring me as full-time (it was supposed to be temp-to-full).
This should be above the final paragraph as to why we quit without warning.
Ohh, remember the turd I said signed a contract that I got the blame for? Well, he was good friends with the secretary who ordered the ludicrously overpriced laptops. She didn't trust me and my buddy, so she gave him remote access to the network (yes, the secretary was required by the organization to have the admin passwords) and let him dig around and change settings. He screwed some stuff up and made a few backdoor accounts (which we promptly deleted, and we banned his ISP), and when we went to complain to the head of the organization, he accused me and my buddy of crap, as he trusted his idiotic secretary, who claimed her friend knew more (yup, hub-in-the-middle-of-the-room guy).
For the finale: we found out about a year later that the organization lost the faith of all its backers (the main backer really liked me and my buddy) and got broken up, and most of the idiots who gave me hell lost their jobs :D
Firstly: Kevin, I'm sad to hear that you had such a bad time, and it's unfortunate that you left the IT field, as it sounds like you had the commitment and drive to do a professional job with limited resources.
Secondly: everyone, please forgive me for going off topic. Can anyone please explain what is meant by 'non-profit school district'? I first took it to mean something similar to state-run schools here in the UK. Then I thought it might be like a collection of academy schools run by a charity. Are my guesses anywhere near the mark?
Well, it's a non-profit company which runs charter schools (I should probably have clarified that a little).
Essentially (to quote Wikipedia), a charter school is a school that receives public funding but operates independently of the established public school system in which it is located. To show how far they can go: two of the schools (which were also the most backwards, design-wise) had religious classes as part of the curriculum; they were also 100% privately funded. The other two were partially funded by the city, so they couldn't have any religion classes.
From what I read on the wiki page, the similar version in the UK is something called foundation schools.
As for dropping out of IT, that was just the end of it. I'd fallen out of love with it: I've worked for a few companies, and pretty much every one treated the IT guys like crap, even though without them there is no way they could do their jobs. Add in competing for jobs with newcomers to IT whose skill sets only seem to extend to colouring with a box of crayons, and it gets quite annoying, especially given the low pay. For instance, on that last job the pay was so low I could have made similar cash flipping burgers; it would have been a tiny bit less, but without the level of stress that helps cause health issues. The pay and treatment could just be a regional thing, though.
Sounds like you got away easy.
This is just for my present position, but I was hired by word-of-mouth as I happened to be in the job market at exactly the same time that a disaster befell this particular workplace. I was snapped up after they'd done most of the initial firefighting (please bear that in mind) and have thus far witnessed the following:
1) One server. Literally. One. Running 500 users. That setup was in the process of being replaced when I started by the following: one server running all the user stuff; another running the SQL server (including payroll), the print system, the phone system, some shared areas, backup software, all kinds of junk. Ironically, they had some of the most powerful servers I'd ever seen running Windows thin clients - powerful enough to run 50+ user sessions. They never got used and everyone hated them, but those servers outclassed everything else in the server room (though they were sadly quite old - floppy-disk era - and we have since replaced them; back in their day they must have been TOP of the line). They sat idle while the one server did all the work, until it fell over.
2) A set of data-recovered failed RAID disks, in a box. Previously resident in the single server. £10k to recover and they never got all their data back. User profiles and documents had been recovered from CLIENT ROAMING PROFILE COPIES! The recovered drives I had framed and hung on the wall with a plaque reading "Cogito Ergo Facsimile" (excuse the Latin - hopefully "I think therefore I make copies"?)
3) No backups. None. The guy was still getting emails about a freeware backup utility but hadn't even bothered to deploy that. There was nothing: no tape, no NAS, nothing except what was on the server hard drive. And he had been there to ignore BOTH RAID failures. By the time I inherited it, there were some NAS boxes, but also an illegal, unlicensed copy of Backup Exec on every server.
4) No WSUS at all.
5) No client images (not even WDS, they just bit-for-bit copied existing machines!).
6) Exchange installed on the DC, thus making an unfixable and unsupported combination (officially, you cannot remove Exchange that's been on a DC because you shouldn't be able to do that in the first place - and demoting a DC server that's running Exchange is dangerous and likely to break both!).
7) Every cable measured TO THE INCH to the patch panels and crimped by hand. And often going through the centres of the racks so you couldn't actually insert anything more into the rack without de-patching EVERY CABLE and re-patching it. For one cabinet we had to pull an all-nighter just to rewire 24U. And we rewired EVERY cable in there.
8) I found a switch hidden in a radiator cabinet powered by a socket inside the floor (near a cellar hatch). That switch ran all the main office and wasn't documented anywhere. The uplink for it was Cat5 over 150m using internal cable that went externally and was thoroughly destroyed by the time I got there. Apparently that had been in place for several years and nobody knew about it. Until it went off.
Needless to say, I got triple-normal-IT budget in order to fix the problems. We bought a proper set of redundant blade servers, spread them over the site, multiple backup strategies, proper backup software, full virtualisation and service separation, a complete re-cable (including redundant links around the site and to the Internet) and it's now... well, quite impressive.
My boss has also indicated that next month we will have a full, live, in-service failover test. I think because I've made all these assertions about what should happen on a modern system and he wants to see if it's true. As in, he will "pull power" (not literally, but simulated by turning off machines gracefully) to one entire server location in the middle of the working day to see what happens. We are merely expected to provide "business continuity" (i.e. We don't lose data and thus bankrupt the company! Shouldn't be hard! Shows you what kind of IT they had previously!) but I'm actually expecting "service continuity" (i.e. nobody but us notices that anything has happened).
But that's not even the worst I've inherited. Hell, I refused to touch one charity's network that I was invited to work on. I had to literally say to them "I can't touch that", and they knew I was doing them a favour by saying so. It wasn't fully backed up, the backups were at a remote site they didn't have access to, nothing on the desktops was in a state where I thought I could safely play with it, and they dealt with the medication records of dying children, etc. Sorry, I have no qualms about fixing it for you, but it's really in your interest to get a proper firm in - because the responsibility, given the state it was in, was so great you wouldn't have been able to afford the price I'd have to put on it. Start again, get a proper firm in, and get some ongoing support while you're at it. It will cost the earth, but that's nothing compared to staying on that precipice of losing the data. I did make sure they had at least one sufficient backup before I left, but that was all I could do in the time.
I'm sure people have worse stories too, but by comparison some "neglected" server settings and a single non-booting server (sorry, your solution of a note not to reboot it is NOT a solution, even temporarily) is nothing.
"Every cable measured TO THE INCH to the patch panels and crimped by hand. And often going through the centres of the racks so you couldn't actually insert anything more into the rack without de-patching EVERY CABLE and re-patching it. For one cabinet we had to pull an all-nighter just to rewire 24U. And we rewired EVERY cable in there."
Master troll is masterful!
Awesome story, but...
"(sorry, your solution of a note not to reboot it is NOT a solution, even temporarily)"
Of course it is. Every workaround is temporary and necessary until a permanent fix is done. Putting a note on a server to remind yourself not to touch it until the replacement is done is perfectly sensible, the alternative would be shutting down backups entirely until new hardware is sourced, OS is installed, and software set up. At least it could mostly work in the days or weeks until the replacement was prepped, depending on the priority.
Only he liked to take things apart just to "see how they worked"...
After being called in to take over from this guy because he'd called in sick, I was confronted with all kinds of equipment (servers, switches, routers, etc.) strewn around the place in pieces, just because this guy was better at taking things apart than putting them back together again!
So after spending half a day putting all this stuff back together, I told my boss that if I ever had to work with this guy I would probably give him a "boot to the head" and quit my job there and then!
1. Not all patches work, so they should be removed from the count.
2. Not all patches are to fix problems. Microsoft seems to regularly issue patches to harvest bytes of yours their previous versions of spyware may have missed. These too, should be removed from the count.
3. Some patches are patches of patches - remove again.
4. Some patches are for stupid devices no-one in their right mind should be using in this century - e.g. fax drivers for cars. Remove these from the count.
5. Some patches actually open attack vectors people use to get into systems. Probably very old versions aren't even probed. So some of these patches can be removed from the count.
6. Some patches are to prepare you for other patches - e.g. Windows 10. Definitely remove these.
7. Updating anti-virus software is a waste of time and bandwidth because all anti-virus software is rubbish.
So after all that, he probably only really missed around 15,000 patches, which is much better.
Besides, as long as there's a reliable firewall (say the BT Home Hub 4) between his systems and the internet, then there's absolutely nothing to worry about.
"3. Some patches are patches of patches - remove again."
And how does one know without manually auditing every single patch? Let's say a whizz with awesome powers of concentration can check a patch in 20 seconds. That's 138 hours, or fifteen and a half days of nine-hour days. And making no mistakes. And taking no more than 20s per patch. And not counting any time for actually applying patches, reboots, etc.
The icon is surely how anybody would want to feel.
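The back-of-the-envelope figure above checks out, taking the ~25,000-update backlog mentioned elsewhere in the thread as the patch count:

```python
patches = 25_000          # backlog size quoted in the thread
seconds_per_patch = 20    # optimistic audit rate per patch

total_hours = patches * seconds_per_patch / 3600
nine_hour_days = total_hours / 9

# -> 138 hours, about 15.4 nine-hour days of doing nothing else
print(f"{int(total_hours)} hours, about {nine_hour_days:.1f} nine-hour days")
```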
"3. Some patches are patches of patches - remove again."
And how does one know without manually auditing every single patch?
How many people manually audit every single patch?
The update list probably contains updates for every IE version between 6 and 11. If the clients can be updated to the latest version, then the earlier patches can be dismissed.
25000 updates in WSUS sounds like it includes drivers and probably way more languages and products than were needed. Yes, I've seen cases where every product was selected for no good reason...
I'd first remove the drivers since if the computers already work you shouldn't try to fix them! Then I'd deselect all unneeded products and languages, and uncheck tools/feature packs and other classifications, create automatic approval rules and run the WSUS cleanup.
And how does one know without manually auditing every single patch?
WSUS tells you whether patches are standalone, or if they supersede or are superseded by (or both) other patches. It's very easy to select all superseded patches and decline them, as a starter for ten...
Also, given the job this useless tit had done, it wouldn't surprise me if he'd not selected the correct product types/languages, and appropriate levels of patching, which probably would have reduced the 25,000 considerably. Additionally, older versions of Windows included patches for Itanium/IA64 which a quick search/decline in WSUS would knock a fair few off the list too (guessing on a hunch that they weren't running Itanium infrastructure).
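The "decline everything superseded" step WSUS offers amounts to a simple filter over the supersedence relation. A toy sketch of that logic, with entirely made-up patch IDs (WSUS itself tracks this per update; this just models the idea):

```python
def still_needed(patches, supersedes):
    """Return, sorted, the patches not superseded by any other patch present.
    `supersedes` maps a patch ID to the list of IDs it replaces."""
    superseded = {old
                  for new, olds in supersedes.items() if new in patches
                  for old in olds}
    return sorted(p for p in patches if p not in superseded)

if __name__ == "__main__":
    # Hypothetical IDs: KB3 is a rollup replacing KB1 and KB2; KB4 stands alone.
    patches = {"KB1", "KB2", "KB3", "KB4"}
    supersedes = {"KB3": ["KB1", "KB2"]}
    print(still_needed(patches, supersedes))  # -> ['KB3', 'KB4']
```

With cumulative rollups replacing long chains of individual fixes, this pruning is exactly why a scary 25,000-patch backlog collapses to something far more manageable.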
1) Button your lip until you fully understand the whole environment
2) Ask lots and lots of questions without passing judgment
3) Do a total audit. If that means lifting floor tiles, then so be it.
Write up your findings (warts and all) and present them, being prepared for one of the following to happen:
1) Be told that it is none of your business (so you quit on the spot)
2) Be told that there is no money to fix anything (as above)
3) Be ignored (guess what?)
4) Be given carte-blanche to fix things (Do the job with a smile on your face)
5) Something else.
The situations described so far in this article and subsequent comments are not that uncommon IMHO.
I've seen the very expensive software I use on a daily basis configured in production with everything as it comes OOTB.
It was only by luck that those systems hadn't been crashing 100 times a day before the problems were uncovered. Even so, it took management more than three months to authorise even the slightest change. Needless to say, I left that contract as soon as I could.
I'm sure there are countless Dilbert strips that could amply describe the sort of problems revealed here. In far too many cases it is the actions (or, more likely, the inactions) of the PHBs that caused the problem in the first place. Their 'face saving' can cause even more damage.
Two years of skipped patches, updates, and basic maintenance at an accounting firm with just over 100 PCs is the worst I've seen. Most of the Windows 7 computers had never had updates run, ever. Same with a bunch of the 2008 servers. The Exchange server had one patch level, maybe.
Everything worked, somehow, and of all things the backups worked. I do feel sorry for the previous tech: the company had become so change-averse that he was hamstrung by the fear that something might go wrong, to the point that he stopped doing any updates. Unfortunately this built up a huge maintenance debt; things started going wrong, he couldn't keep up, and they fired him because he didn't do his job.
They had another firm come in for a few weeks who, I assume, told them they needed to change the way they did everything, and that, yes, downtime had to occur. They got rid of them and I ended up on the project. I told them the same thing; this time it clicked, and they figured out there was some kind of structural problem. I've worked quite a few weekends since then getting everything caught up.
"They got rid of them and I ended up on the project. Told them the same thing, this time it clicked"
You were lucky. Such companies usually just keep firing consultants until they get one who tells them what they want to hear.
I've seen some multimillion dollars ballsups as a result.
"You were lucky."
It may not be luck. See the Rules post above. If you have documented what the current situation is, what's wrong (& what's right) and WHY (compare with currently accepted good practice) and what needs to be done to recover, complete with priorities, then it becomes difficult for them to argue. Difficult as in not having a legal leg to stand on if it turns nasty.
"Such companies usually just keep firing consultants until they get one who tells them what they want to hear."
Following up on my own post.
One or two consultants make a career of telling companies what they want to hear, doing what's asked (usually installing a setup decreed by manglement which patently will not work as designed) and then putting in exorbitant extra charges for "adjustments" or "changes".
This can be a great wheeze if you're ethically challenged - I know one firm which made more than $5 million this way. In the end (after a "forced management change") the whole mess was declared unfit for purpose and replaced with an open-source solution that cost about $50k to implement and deploy.
The people responsible (manglement and the consluting company) both ended up getting paid more in other gigs and creating even more spectacular ballsups. Be very careful if you see references to someone's "highly successful rollouts" and/or glowing employer references; often they're mandated by lawyers as part of the cost of getting rid of the offenders.
"2 Years of skipped patches, updates, and basic maintenance skipped ... is the worst I've seen. Most the Windows 7 computers had never had updates run, ever. Same with a bunch of the 2008 servers. The Exchange server had one patch level, maybe.
Everything worked, somehow, and of all things backups worked."
What I find interesting from some of the comments is how many so-called professionals have swallowed the idea (or is it an urban myth?) that systems need to be constantly patched just to work. They don't! In fact they can be very, very stable, which is why it is normal practice to disable auto-updates on a server and only install them as part of scheduled maintenance. The problem in many smaller companies is that without a well-staffed professional IT function, scheduled maintenance gets pushed onto the back burner and forgotten.
What is particularly interesting is that of the examples written about here, none had a malware problem worth commenting upon that was attributable to the absence of patches...
IT guys should be accountable for their work. Their work should be peer-reviewed, just as ANY technical work in ANY field should be. In firms where there is no one else who can do this, an external organisation should be engaged to audit and report on status/risk level/recommendations.
If, as IT workers, we find that we are not being held to account ourselves, then we must be brave and honest enough to speak up and recommend to our employers that they get our work checked by a third party.
Just one thing: who is paying for all that?
Sorry, but pie-in-the-sky intentions will never overcome the clueless manager whose hands are on the purse strings.
And that is the problem in every post of this kind of article. Issues cropped up because the managers put the budget on something that seemed more important until the amount of trouble was just too big to ignore - by which time, of course, things were much, much worse than they needed to be.
A proper manager should at least have an up-to-date list of logons and passwords, implying an accurate knowledge of what is plugged in where. Anything less than that and you're not negotiating helping them with their IT, you are in point of fact becoming the IT manager. Without the authority required for the job, you are doomed to either fail, or put in a lot more effort than you are being paid for.
>The outside consultants are often the very person described in the article.
Or worse. All too often cancer... although the cancer tends to betray its malignancy by referring to itself as a consultancy capable to monitize actionable latent added value leveraging core competencies to close the deliverables loop thusly commoditizing best practice at the end of the day without boiling the ocean.
Our hero could have his contract extended by a year or two, have plenty of time to make everything just perfect, and hand it over to the next guy or gal.
And the next guy/gal would STILL have a list of a hundred things that weren't done right by our hero.
It's a combination of absolute fact with variable expert opinions. Ratio varies of course.
He was taking some huge risks doing what he did considering this office was 40% of the money flowing out of a multinational.
If patches haven't been applied for four years, who knows what could happen when they are applied? He saw it with that one server, which was unable to restart without a lot of hand-holding due to lack of disk space from all those patches being loaded without properly evaluating the environment first. What if half the PCs in the office had refused to start up, or got into a blue-screen loop, or any of the other possible outcomes that would have prevented people doing work?
The first task was not "patching then AV updates"; it should have been checking backups, testing backups, verifying backups. Before he touched ANYTHING, he should have made 100% certain he had a way to go back to the previous state if he broke something. Then you verify the integrity of storage (i.e. all RAID disks present, no SMART warnings about impending disk failures in the servers and desktops) and the free space left. Then you verify the network is healthy, and so forth.
Touching stuff and applying patches is well down the list - and you should restart every server and then every PC and make sure they come up OK and users can use them successfully for a day or two before you do that again with the patches. If you don't do this, you don't know if a patch broke something or there was something already borked that had nothing to do with all the changes applying 4 years of patches encompasses.
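The order of operations in the two posts above (rollback path first, storage health second, patches last) can be sketched as a pre-flight script. This is a minimal illustration, not anyone's real tooling: the backup path, the thresholds, and the names `preflight`, `disk_ok` and `backup_fresh` are all invented for the example.

```python
import shutil
import time
from pathlib import Path

def disk_ok(path=".", min_free_fraction=0.15):
    """Refuse to patch if the filesystem holding `path` is low on space."""
    usage = shutil.disk_usage(path)
    return usage.free / usage.total >= min_free_fraction

def backup_fresh(backup_file, max_age_hours=48):
    """A rollback path exists only if the backup is present and recent."""
    p = Path(backup_file)
    return p.exists() and (time.time() - p.stat().st_mtime) <= max_age_hours * 3600

def preflight(backup_file, path="."):
    """Run every check; touch nothing unless all of them pass."""
    checks = {
        "disk_space": disk_ok(path),
        "backup_recent": backup_fresh(backup_file),
    }
    return checks, all(checks.values())
```

SMART and RAID status would need vendor tools (smartctl, the controller's CLI) and are left out here; the point is only that applying patches comes last, gated on everything else passing.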
I popped a response to a post earlier. For the desktops I would have created a new golden image and rolled that out. Four years of patches on an out-of-date system? No point patching; the system is beyond repair in reality and needs replacing. If you aren't allowed to update properly then the company is in dire straits, and I sure as hell wouldn't want a permanent job there; cash flow that bad means you won't have a job for long. Better to keep it ticking over whilst looking elsewhere if there is no more money in the pot.
However, that likely involves reinstalling every piece of software in use. For a plain office shop, sure. For a system with lots of third-party software, that's a nightmare in itself.
It's one of those things you have to do, yes, but between that and just backing up and rolling out updates? I'll take the updates. Maybe roll out an image next year when you know what's supposed to be on the machines, etc. But you stand a chance of rolling back an update. You don't stand much of a chance of rolling out an image over existing machines without losing something - even if it's just a lot of time in reinstalling all their software.
First thing I did in my post above? Collect in every desktop and do a software audit. Useful for licensing but much more useful for "where the hell did that come from, why that version, who's got the disks, have we actually paid for this", etc. And, yes, in some cases we had software on every machine that they'd paid for a handful of licences for. I spent several £K just properly licensing what they had and thought they were already licensed for and couldn't live without.
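The "installed everywhere, licensed for a handful" problem above falls straight out of a simple count once the audit data is in. A sketch, assuming an already-collected inventory; the function name and the data shapes (machine-to-package dict, seats-per-product dict) are invented for illustration:

```python
from collections import Counter

def over_deployed(installs, licences):
    """Return software installed on more machines than there are paid seats.

    installs: dict of machine name -> list of (software, version) pairs
    licences: dict of software name -> number of seats paid for
    """
    counts = Counter()
    for machine, pkgs in installs.items():
        # A product counts once per machine, regardless of version or duplicates.
        for name in {n for n, _v in pkgs}:
            counts[name] += 1
    return {name: (installed, licences.get(name, 0))
            for name, installed in counts.items()
            if installed > licences.get(name, 0)}
```

Feeding it the collected per-desktop lists gives exactly the "paid for a handful of licences" gap described above, product by product, which is the number the bean counters need to see before signing the cheque.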
>> Collect in every desktop and do a software audit <<
My current client-provided machine loves doing an audit of itself, seemingly on an hourly and ongoing basis. Would make sense if I could actually install anything on the damned thing. Then they get the department head wandering around the office and crawling under desks to find out who's got what PC....
I wouldn't - gold images only work if the network is correctly configured, all files are stored on a fileserver (not the local drive) and there's absolutely no mission critical customised software on each desktop.
In this case, there is very obviously not a setup of that calibre. Patching is absolutely what I would do, testing on a sacrificial machine, then slowly rolling it out.
This also appears to be XP; SP3 was out years ago. Four years of patching is entirely standard.
For the desktops I would have created a new golden image and rolled that out.
Yeah - Reformat & Reinstall Everything - The standard IT "Support" one gets these days!
First problem is: Most shops already have got a PFY for that job and he is way cheaper than you are.
Second problem: Only secretaries have a standard Windows set-up. If it is a financial place, there will be custom software and custom Excel VBA apps; if engineering, there will be design tools locked by licence managers (and kept alive even though the licences were never renewed - et cetera).
Lose any of that custom software - or - force an upgrade of some relic design tool because License Manager and tar & feather is coming right up!
Before he touched ANYTHING, he should have made 100% certain he had a way to go back to the previous state if he broke something.
I think a step back from that would have been more to the point. This was touched on above, but first he should have made sure of the state things were in. Second, he should have come up with several possible courses of action. Third, he should have consulted with management and obtained informed consent before proceeding to take any sort of action. Management should have to accept the risk of making changes, especially of this scope and nature. Allowing them to bury their heads and later deny everything when it all goes wrong is never a good strategy. Ultimately, management is responsible and it is a good idea to keep that in mind.
I've cleaned up plenty of messes (both my own and those of others). I have found it to be useful to let those above me know just how bad things really are, especially as it makes me look that much better after it's all sorted. On the other hand, having documented that the boss signed off on something and it turned out badly because of the decisions someone else made rather than something I did has proven helpful on occasion, too.
"True though difficult if you've a boss who's wily enough not to commit things in writing/email."
1: If you have manglement like this then the company's fucked anyway
2: It's amazing how small a pen-cam can be
I've seen someone deny agreeing to something, be confronted with video/audio evidence, deny the denial and then deny the denial of the denial - and this was supposedly a CEO. He was also fond of using threats of litigation to silence external critics.
I think the reason he didn't do this job correctly is simple: Bill was also a cowboy.
Yes his predecessor made mistakes and, most importantly, omitted to leave behind key passwords, but from the evidence presented in the article, Bill started many of the fires he then had to fight. Yes, the place might have been a tinderbox, but Bill chose to light the match.
As others have noted, Bill's role was to provide one month's support cover, hand over to someone else and walk out himself. In this situation Bill should have focused on 'fixing' problems as reported and creating a "welcome pack" and a systems update/refresh plan for his successor, who may in the future return the favour and offer Bill an assignment or recommendation.
<quote>Before he touched ANYTHING, he should have made 100% certain he had a way to go back to the previous state if he broke something. </quote>
Agreed! But, you don't always GET that luxury, especially if the shit has already hit the fan. Like this one company where I had responded to an employment ad.
The first thing that stared me in the face were the numerous BMWs in the parking lot. As I waited for the interviewer, I chatted up the receptionist, and found out that those cars were leased by the company for the use of its executives.
After speaking to the interviewer, and before leaving, I requested an opportunity to 'review' the IT estate. This was back in 2010, and I found a number of Win 98 machines, and some early XP machines. I pointed out the age of the estate, and inquired about the budget prospects for getting in new equipment. The response (some bullshit about increasing shareholder value) was inadequate, and I told the interviewer so. I then told the interviewer that I was no longer interested in the position because the executives had made their priorities clear, and I was not about to become their scapegoat.
The interviewer was shocked, and 'wondered aloud' as to how I "understood" the company's "priorities". I replied that the company executives would rather invest in toys for themselves, instead of tools for the employees to better perform their jobs. The interviewer didn't understand what I meant, so I pointed to the BMWs, and curtly remarked: "THERE are your fucking toys!"
Their owners sold them off in 2011, and the first move the buyer made was to shitcan the entire executive ranks.
"He was taking some huge risks doing what he did considering this office was 40% of the money flowing out of a multinational."
you have no idea of how many hours he had allocated to this task, or of what budget was assigned.
In general if you drift into the role via "I know a bit, shall I take a look?", it's expected to be done around your normal work for no extra pay - AND obtaining any budget for hardware is difficult-to-impossible.
"you have no idea of how many hours he had allocated to this task, or of what budget was assigned."
No, but we do know his timebox was 1 month with no opportunity for extension, and from Bill's complaint about having to be the last one out each evening, it would be reasonable to assume he was expecting the job to be traditional office hours. Hence, with incomplete information, Bill on his first day decides to start a rather large systems update, with no regard for how long it might actually take, or contingency for when things take a lot longer or go wrong, as they typically do when you have more than a couple of months'/years' worth of updates to apply.
"In general if you drift into the role via "I know a bit, shall I take a look?", it's expected to be done around your normal work for no extra pay - AND obtaining any budget for hardware is difficult-to-impossible."
I would agree. Bill (and I suspect others) misunderstands that "take a look" does not mean "fix it"; it actually means spend 15 minutes taking a look and report back on what you've found. So, for example, having discovered that the WSUS box had a list of 25,000 patches awaiting approval or declination BEFORE it would even download them, he should simply have logged his findings, asked why this might be so, and determined whether it was having any impact on the live systems (by impact I mean that these patches included fixes to problems that were currently having to be manually worked around). Adopting this approach, it is obvious that the only problem (listed in the article) that really demanded Bill take action was the poorly implemented back-up that was filling up the Exchange Server file system; which, as many know, will cause Exchange problems and give grief to its users...
>> you have no idea of how many hours he had allocated to this task, or of what budget was assigned."
> No, but we do know his timebox was 1 month with no opportunity for extension and from Bill's complaint about having to be the last one out each evening,
I wasn't talking about Bill.
IT disasters don't usually just happen overnight. This one was years in the making and Bill compounded it but the root cause (poor management) hasn't been addressed, so it will keep unfolding.
Taking over someone else's IT house of horrors is always going to get to a point where you start shaking your head slowly, your eyes open wider and wider with the deep breath that you take in. Then you exhale whispering "oh my f**king $deity".
Been there, done that. Luckily though it does seem that at least private businesses have some sort of IT plan in place, due to the simple fact that "if the IT breaks down..... WE ALL BREAK DOWN" (or the business is financially kippered).
I've had times where I've been working on contracts in IT for public services within the UK.
NEVER, NEVER, EVER AGAIN.
Patches not being installed? Pah, that is the LEAST of your worries (well... nightmares to come).
Where shall I start? Insecure networks; desktops running XP with minimal security (including anti-virus); hardly any web security; user names and passwords for confidential databases being given out by IT to non-IT-literate users; all manner of hardware sat there knackered with stickers saying "reported to IT on DD/MM - reference number abc123", those stickers dated 3 or 4 months previously and the hardware untouched since; file servers with no AD authentication, so basically, if you can guess the server and folder structure then you WILL have access to whatever files are stored (and all servers fully visible on the network and not locked down).
Dare I go on.....?
All I can say is if anyone is considering a contract in ANY of the UK's public services IT systems, whatever the contract rate is.... well, I'd say at least double it.
...it's that shit's gonna happen, so just collect your salary and forget about the whole patchy-updates thing. The dude who didn't do the patching for two years, was just being chilled and cool. No need to get righteous on him.
One company goes bust, another one comes along. One credit card gets Pat Butchered, then just use another one to pay Microsoft with. Just remember: you be changing things, and it's your ass gets busted when it all goes wrong, and the fan gets hit with the hot brownies.
Besides, people are usually pretty honest nowadays: software is usually written so that clicking OK and typing in your credit card number is pretty much always the right thing to do.
I'm working on a setup currently.....
3,000 staff with mobiles.
maybe upward of 50 km of cables at one remote site.
network built using HP OfficeConnect 16s because they were cheap, with TP-Link and recent Linksys mixed in.
most wireless network kit so old it is using security cracked in 2008.
2 server PSU fires in the past; soot covered the computers and it's never been cleaned.
0 cleaning schedule in the server room for the last.......... 10 years.
0 network diagram.
main incoming network & telecom equipment comes in under a 3-phase 600 V switch panel,
with network cables handily using the 3-phase distribution ducts because they were 'easily accessible'.
and Windows servers so old, it makes floppy drives look like advanced alien technology....
"security cracked in 2008"
I use security 2015... fixes all of the problems with older versions. BTW, WTF is a "security"?
The problem with a lot of sysadmins is they barely understand the machines or the software they are meant to be maintaining, but think they are the dog's bollocks of everything. They rock up to a new place with their massive ego pre-installed and set about bitching about the last guy's mistakes, only to add their own misunderstandings to the pile. I've seen big-headed sysadmins get very upset when a junior dev looks at their cabling and tells them "you realise you can't wire twisted pair in any old order as long as the order is the same on both ends, because it uses differential signalling, right?". Instead of taking the time to learn some background on the stuff you're meant to be looking after, you stand around joking "oh my god it has floppy drives" as if that's some sort of measure of anything.
"He means WEP; try to keep up when adults are talking. For your homework, try replying to the post you're replying to, instead of the main article, if you can brush that massive chip off your shoulder."
WEP wasn't broken in 2008: https://en.wikipedia.org/wiki/Wired_Equivalent_Privacy#Security_details
Surely even sysadmins can use google/wikipedia?
And if he means WEP why didn't he just say WEP?
"WEP wasn't broken in 2008"
Yes, you're right. It wasn't broken in 2008 - Vulnerabilities go back to 2001. http://security.blogoverflow.com/2013/08/wifi-security-history-of-insecurities-in-wep-wpa-and-wpa2/
Google, sure, but Wikipedia isn't always your friend.
> ... but think they are the dogs bollocks of everything
Anon for obvious reasons ...
But then some of us know our limitations, but we can't get another job because we're honest.
The PHB who knows f***-all has to decide between the complete bullshitter, who is really convincing because he actually believes his own crap, and the honest guy who is happy to talk about the gaps in his knowledge. What's he to do? Choose the guy who admits to not knowing everything about what the PHB needs him to know about, or choose the one who "clearly does know" all about it?
Of course, the HR depts and agencies counter this "exaggeration of skills" by simply exaggerating the requirements. So those of us too honest to outright lie about what we can and can't do never get past the "didn't tick the box that says he's Sage level with 10 years' experience in this 2-year-old technology" stage.
Often the "speak your mind bluntly and honestly" trait that comes with some levels of Asperger's isn't helpful to career progression :-(
Me bitter? Why should you think that.
At least now it's diagnosed, it's officially a disability - which has a few uses ;-)
That is a classic employer error. I'm sure that sort of situation is very common in the real world of work. I don't see how the employee is at fault if he was left completely unsupervised.
It does show that no security is possible on the internet. It all relies on goodwill. Even virus writers typically don't try to erase your hard drive. If they showed bad will most computers would be blanked every other week.
Actually, some do try to wipe your drive; back in 2009 the PC I was using for Uni/College work picked up something that deleted 50% of my media files and tried to use my HDD as a store for extreme porn.
My tutors didn't really believe me, until THEIR system went down with the same infection a week later - the way their system was set up meant they couldn't even use a photocopier for the next 2 months.
(NOT a BOFH, but lumbered with looking after a nursery school's PCs for the last 20 years for no extra pay.)
'..Seems more like he deliberately set it up to plunge them into trouble if his employment ended...'
No. A more probable scenario is that as the 1,001 little 'interventions' he carried out on a daily/weekly/monthly basis to get around whatever foibles the system had due to management indifference stopped happening, the system eventually attained the blessed state of FUBAR.
I give you, as an example, the classic 'remember to wipe logs off machine X weekly after parsing them, as the bloody thing is 7 years overdue a disk upgrade...' post-it note I eventually found on the top left-hand corner of a monitor in a server room (cupboard), a parting gift from the previous incumbent.
What happened if you didn't wipe the logs, you ask? A fun cascade failure which rendered the network almost totally unusable in about 4 hours flat...
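That post-it ritual is exactly the kind of thing that should have been a scheduled job rather than tribal knowledge. A minimal sketch of the weekly wipe as a cron-able script, with invented names, paths and thresholds, since the real machine's log location and safe retention period are unknown:

```python
import shutil
import time
from pathlib import Path

def prune_old_logs(log_dir, max_age_days=7, min_free_fraction=0.10, dry_run=True):
    """Delete old *.log files, but only once free space actually runs low.

    With dry_run=True (the default) nothing is removed; the function just
    reports which files a real run would delete.
    """
    usage = shutil.disk_usage(log_dir)
    if usage.free / usage.total >= min_free_fraction:
        return []  # plenty of room; keep the logs around for forensics
    cutoff = time.time() - max_age_days * 86400
    victims = [p for p in Path(log_dir).glob("*.log")
               if p.stat().st_mtime < cutoff]
    if not dry_run:
        for p in victims:
            p.unlink()
    return victims
```

Dropped into cron (or Task Scheduler on a Windows box), something like this replaces the post-it; the dry-run default is there so the first run shows what would go before anything actually does.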
This is so normal. And giant chunks of our infrastructure, from nuke plants to giant financials, run like this. If anyone really thought about it they'd never sleep.
People should be surprised by competence more than they should expect it.
As long as the buildings aren't on actual fire, nobody above the 50th floor cares.
I was on a trading course a few years back and their internet connection went down.
Now ok, this was a smallish firm, but ALL of their money was made from these trading courses, which need an internet connection oddly enough.
Since I was on the course I offered to take a look - no promises. Turns out that their router was plugged in to a UPS which was plugged in to an extension lead and for some reason the UPS wasn't on. They probably had a power outage at the weekend and they had been running all morning from the batteries. Powered on the UPS and everything rattled back into life and they thought I was some kind of magician.
Considering the rats-nest of kit in the broom cupboard which was their IT room I recommended that they get someone in to sort it all out (they couldn't have afforded me :) ) - to the best of my knowledge it is still in the same configuration now.
I once worked at a new U.S. Government server facility that was too important to risk any service failures. Fortunately, the boxes served an isolated Guv network over continent-wide leased lines.
After 5 years, they were forced to upgrade because the old Operating Systems could not support any new hardware or software.
Fortunately, money was not a problem.
The Guv paid my company to build an entire suite of replacement hardware/software systems at our support factory, using Operating System software that was upgraded by 2 major versions. The new equipment was air-freighted to the remote location, and installed in parallel with the old reliable systems.
After the cutover, the new servers were backed up with online duplicates. All new patches and user software are installed for 2 months on the duplicate systems, to verify reliability before upgrading the main servers.
Eventually, the Guv scheduled downtimes up to 4 hours for migrations and new installations.
But they keep the previous systems ready to go live immediately if needed.
Bill left the WSUS server "buzzing the 5 Mbps WAN link overnight" and arrived to find "users not happy that they had to restart their machines, and some had a few hundred updates to apply". Some were even running Windows XP and had also been left unpatched.
Where I work he'd have been arriving to find he'd been sacked.
Nobody with an ounce of common sense would dispute that the patches need applying, and that there is a definite wisdom in taking the hit in one dollop. That being said, there has to be a better way than taking out the staff for a day while hundreds of updates are applied.
It's just yet another tale that makes me long for an industry regulator. "Sorry Mr Lout, but your code sucks and you need to find another profession."
Boo hoo. The workstation has to be restarted to complete the update. And all those documents? You need to "save" each one with a file name. Not just leave them open, unsaved, and lock your PC every night.
Our PCs tend to decide when to reboot and give you 5 minutes warning if lucky, sometimes just 1. This isn't my policy and I think it goes too far, but it does work.
"If you see this image while scrolling down the page you have been visited by the CHAOS MONKEY. An impromptu reboot and good fortunes will come to you, but only if you save all your files in the next 20 seconds and loudly proclaim THANK YOU, BASED CHAOS MONKEY"
The last couple of times I've left documents for whoever replaces me. At the last job the document was about 30 pages long, detailing all the annoying recurrent problems with systems management wouldn't replace, service account information etc., all stored in my boss's safe for the next sucker. The time before that I worked in a school and the handover document was knocking on for 100 pages; I tried to keep it as short as possible, but with so many different versions of software (I covered 9 schools) it was impossible.
Both times I've had e-mails from my replacement thanking me for doing it. One said it was a bit of a reality check, but that having all the common fixes sitting there saved a ton of time, especially as at neither job did I have any sort of knowledge base product; it was all done on spreadsheets by myself and my techs, both of whom were utterly useless.
As others have noted Bill's role was to provide one month's support cover, hand over to someone else and walk out himself. In this situation Bill should have focused on 'fixing' problems as reported and creating a "welcome pack" and a systems update/refresh plan for his successor ...
Yeah. Bill really did a lot of unnecessary pooch-screwing there, and probably made life harder for the next guy. Some problems he caused will likely only show up after the next full-timer is aboard, too.
For example, some software refuses to operate with various Windows OS patches applied - such as when MS changes crypto defaults. If that software happens to be critical to the company and run less than once a month - e.g. (financial) batch processing, run once at the end of each quarter - the next guy is going to be blamed for Bill's fuck-up of blindly applying everything. :(
If you are there only for a month in a care taking capacity then tbh, best dealing only with immediate fires and then documenting the rest of the issues.
The documentation will help the next person and will hopefully allow them to persuade management to make changes.
It would actually cost a lot less to do it less badly, and it would make it a management concern at the correct level, rather than allowing top management to surround themselves with informal managers who try to assign only menial tasks and dodge any responsibility.
When you are just one guy, tackling stuff like this is career suicide. The rule is: the last person to touch the system gets the blame for all the problems. Don't fix things for idiots, because they will ignore all the benefits and only look at the problems (or even make them up).
The guy who recommended documenting everything had the right idea. Idiots usually love paper. If nothing gets done, it's not your problem. Move on. They have screwed up. They will screw up again. Just don't get any on you.
Yep, you've got to play their game.
Get it all down on paper, run a risk analysis including all budget requirements and externalisations the company has created over the years and get it under the nose of the bean counters and execs. If you want to be flash, follow this up with a HLD doc, showing how, if *you* were the shot caller, it could be sorted out with various plans/costs clearly shown. Never make an actual change without CAB authorisation and approved rollback plans.
due diligence mate.
They love that shit.
but that would take you out of the office for at least 3 months or so every few years, cost more than your salary and make you highly employable elsewhere.
And for God's sake keep passwords in the safe. But don't let anyone know the combination!
Just because someone doesn't do their job properly, it doesn't justify smacking them in the face or kicking them in the head, as someone mentioned in the comments. I've worked with some annoying people in my time, but that didn't justify physically attacking them. If someone worked for me with that attitude they'd get the sack PDQ! Let's leave all this pseudo-macho schoolyard bravado crap behind. I mean, how are we going to encourage more women to work in IT if they have to worry about this type of nonsense? El Reg, you should be ashamed of yourselves for this article, even if it is tongue in cheek!
"If someone worked for me with that attitude they'd get the sack PDQ! "
You mean if someone actually cared about sorting out your systems properly that they would feel frustrated enough to express this kind of sentiment you would sack them? In that case you are probably the person who hired the initial fuckwit who left the mess, you obviously can't tell the difference between venting and a real life threat.
"I mean how are we going to encourage more women to work in IT if they have to worry about this type of nonsense."
Why would a woman worry about this type of thing? They probably wouldn't have left such a mess behind in the first place if my experience of women in IT is anything to go by. Cowgirls they are not.
"El Reg, you should be ashamed of yourselves for this article, even if it is tongue in cheek!"
To steal from your own thread title, grow up.
You mean if someone actually cared about sorting out your systems properly that they would feel frustrated enough to express this kind of sentiment you would sack them?
No, I think he/she means that if anyone (such as Bill) came in and turned previously working machines into non-working ones for many staff x multiple hours - when he/she could have scheduled things to not impact staff - they'd be in trouble. The "attitude" bit there is that Bill thinks his "applying patches" (which have sat unapplied for ~4 years) is more time critical/important than the work other staff do.
In my experience (so far), in most workplaces he/she would be in trouble. And I tend to think they should be - circumstances depending of course, as things aren't always black and white.
Oh, I thought they were referring to the desire to give the previous techie a smack in the chops for creating such a mess, perfectly understandable in my book.
I agree that the path chosen to fix it all was a bit immature, but then if he was more experienced he wouldn't be a sole techie in a small company in the first place most likely :)
The article and posts above explain more eloquently than I ever could why I gave up being a sysadmin.
But working on bids I came across a potential client for an outsourcing of their ICT, and I can only say that I doubt they would have let us do the job properly. Their security manual (all of it) was on a server in their internet-facing DMZ, even though it was marked 'sensitive' and the rules said no sensitive information was allowed in the DMZ; their head of security insisted it was OK because the server was partitioned and they had had it tested once, and the tester couldn't get access to the partition.
They admitted to, on average, two level 1 incidents a week (yes, that is two incidents which prevented most of their staff from doing their jobs, with no work around, every week).
I could only hope that their staff subverted and ignored the security instructions so as to do their work securely.
I advised my bid team to walk away from that one, but we bid anyway, and lost. The 'winners' walked out after a couple of months.
On other clients, it is essential to remember that the Director with responsibility for IT has a day job, and his (usually his, rarely her) prime objective with the IT budget is to minimise it, and not let anyone turn anything off for even half an hour. After all his/her Rolls only needs servicing once a year, so why should IT need anything more? IT exists to replace director's lost laptops, not to whinge about upgrading the Windows boxes (whatever they are, do they put flowers in them?). The quality of the tea served in the boardroom is far more important than that.
Hello, and welcome to Monday morning all!
Perfect article to start the week off, Simon. Just felt the urge to point out that if it weren't for the "idiot boys" and "Wild Bills" of the world, many of us would be without gainful employment!
"Bill trusted Windows and left it to do the job. He then rebooted and... nothing happened"
Fantastic! One problem begets another, and frankly people should punch themselves in the face if vendor patches and updates are thought to be reliable any great percentage of the time. For most of the world, computers and networks are mere tools and a means to an end. When was the last time you saw a polished hammer or nail gun (with safety still intact) stored in a flight case with a maintenance schedule and updated documentation?
Overworked, underpaid, misunderstood, neglected and disrespected... job security for those that love a challenge in seeing a disaster area restored to a reasonably secure and functioning state.
Work hard, play hard, and know when and how to relax, mate. It means more business in the future and less chance of the sheeple and techknownaughts causing a heart attack or a BOFH episode.
I remember working as an admin at one small company (although the name escapes me after so many years, and given the short period I worked there that is probably a good thing). They had an NT box where all the networked PCs' data was held. It was connected via a switch that could not handle the load, and a shitload of ethernet cables with no labels, in such a knotted mess it was impossible to trace which cable went where. They ran under the floor to various rooms in the building (no nicely placed wall sockets). They also went to hubs (also under the floor) where they got split to various PCs, which users complained were 'slow' (hmm, I wonder why that could have been).
The NT box held a ton of porn (mostly saved by management, I might add) and the server was running low on space on the RAID array (I deleted it all and then had to deal with management in my ear about doing so. I kid you not). The backups were made to tape in the most erratic manner; none were tested and the labelling was a nightmare. I spent a VERY frustrating evening trying to rebuild the array after one of the hard disks died, with the 'concern' that the backups might not work if it could not be rebuilt (or another disk died during the rebuild). They had an ancient database system that was 'proprietary', coded by someone who had long since retired, and the temp IT guy they had been using only knew small parts of it. I was made 'redundant' once I got totally fed up with the mess and went to management with my diagnosis that the company was about to go tits up if I was not allowed to spend some decent cash on fixing it.
I wonder if they are still in business. Actually no I don't.
I worked at one place where their idea of a secure server was to chain it to the rack with a huge chain and a cheap padlock. The same place sacked the nightshift on Christmas Eve, which led to them locking out every single manager/team leader login bar one on the system, as they discovered on 28th December when the skeleton shift arrived for work. Of course the helpline (outsourced to Romania) was not available, as they were on holiday. I happened to know that one remaining login, so I was able to save 20 people having to stay around doing nothing all day (no login meant no access to the work schedule and no ability to print procedures or do any other paperwork, and no QA/team leader would be able to sign off work either).
So I can quite understand why the original admin did what he did. Mission creep happens everywhere, and unless you are careful you wind up doing two or more people's work for less pay. I was once on a project that, after a few months, they decided to give to another department. There were things I knew that were not documented, and I did not pass them on (not documented because the image team couldn't be arsed to update procedures). The contract was cancelled a few months later because they couldn't get the systems out of the door fast enough.