So did the PHB get the blame for his terrible decision, or did the "blamestorming" point the finger at the unfortunate techies? Sadly, I suspect the latter.
A pint for a good story though.
Welcome once more to Who, Me?, where El Reg readers share their IT catastrophes. And it doesn't get much more catastrophic than this week's story from "Marty". At the time of the incident, Marty was working in the trenches at a financial institution. "When I was first employed, the rack-mount servers for our division were …
I was running a company in the US that handled sales and support for a UK company. We were all set to attend a show in New Orleans: we'd paid for the exhibit space, shipped the show supplies and bought plane tickets, and we would have been meeting potential new customers and providing support for existing ones... Two days before the show I was told that we would not be going, because the UK boss wanted to buy a laser printer instead. This was over 20 years ago - laser printers were new on the market and expensive.
This was over 20 years ago - Laser Printers were new on the market and expensive.
Your time dilation is strong. You could get a Canon LBP4 for maybe £500-£600 in around 1996. And that's over 20 years ago. It's depressing when you're thinking "I dunno - 15 years maybe?" and it turns out to be 25...
Also, the LBP4 was slow as hell.
You could get a Canon LBP4 for maybe £500-£600 in around 1996
I remember it well. Bought my LBP4 (also re-badged as an HP LJ4 I think?) circa 1991 for £1,200, though that did include an extra 2MB of memory and (and this was the key point) a Computer Concepts LaserDirect card for my Acorn A310 (+4MB).
Didn't make the 4 pages per minute any faster, but the time-to-first-page was lightning quick by the standards of a normal parallel port connection.
And then the podule backplane
Already had the backplane (four slots!) because I also had a handheld scanner (Watford Electronics), and I'd upgraded to MEMC1a and 4MB RAM by then too. I think I also had a Watford IDE interface (I certainly had an IDE disc, though maybe that came after the printer). I had a lot of money invested in that machine. A few years later the scanner and the printer transferred to a RiscPC.
The LBP-4 itself is currently sitting in the pile of things that still worked last time I used them, but for which I have no use now. I don't want to chuck it, but finding someone who wants it...
For the last 12 years or so I've been using a Xerox Phaser (solid ink) 8560. Fantastic printer, networked so the RiscPC can still use it. Takes £400 to restock the ink for 3,000 pages. Alternatively I could buy a brand new Lexmark colour Laser printer with 3,000 pages of toner and save £140. Ridiculous.
When those solid ink printers first came out I worked for a large urban school district where the new director of IT decided that all printers should be networked Xerox Phaser (solid ink, i.e., fancy, expensive square crayons), shared among numerous classrooms. He wanted to make his mark.
He didn't even bother having teachers test them in the schools before buying them. So, what could go wrong?
As you may or may not know, teachers like to print things out (like worksheets, tests, etc.), and then either write on them, or have students write on them. The problem is that a ball-point pen, and sometimes even pencils, cannot write over the wax "ink." Oy! These printers turned out to be quite an issue.
Oh, did I say that they were color printers - and teachers love to print in color! The only problem was that the District decided that the "ink" was too expensive, so they rationed it, and the paper. That left the teachers without tests and worksheets. You can just imagine. (We tried to sneak some old HPs back onto the network.)
"When those solid ink printers first came out I worked for a large urban school district where the new director of IT decided that all printers should be networked Xerox Phaser (solid ink, i.e., fancy, expensive square crayons), shared among numerous classrooms. He wanted to make his mark."
So he wanted to make his mark with crayons? How many months ago did he graduate from pre-school?
solid ink, i.e., fancy, expensive square crayons
Compare them with original laser toner and they aren't dreadful - about £100 for 3,000 pages (colour) or half that for black compares well with many manufacturer costs. The big advantage is the ability to add another crayon at any time, without really interrupting printing, and that the only waste is a very small cardboard box (recyclable in the normal bin) and a small plastic pot not unlike a child's yoghurt pot (recyclable). Compare that with most laser printers where there's a big plastic contraption with cogs and springs and (in some cases) the imaging drum, which can only be recycled by sending it back to the manufacturer.
With regard to teachers, it's an absolute doddle to add ink, so less chance for wastage or breakage.
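For anyone who wants to check the arithmetic, the per-page figures quoted in this thread work out roughly like this (the prices are the commenters' own recollections, not list prices):

```python
# Consumables cost per page, in pence, for the figures quoted above.
# All prices are as remembered by the commenters, not manufacturer list prices.

def cost_per_page(restock_cost_gbp, pages):
    """Cost per page in pence for a given restock price and page yield."""
    return restock_cost_gbp / pages * 100

solid_ink_colour = cost_per_page(100.0, 3000)  # ~£100 per 3,000 colour pages
solid_ink_black = cost_per_page(50.0, 3000)    # "half that for black"
phaser_8560 = cost_per_page(400.0, 3000)       # the £400 restock mentioned earlier

print(f"solid ink (colour): {solid_ink_colour:.1f}p/page")
print(f"solid ink (black):  {solid_ink_black:.1f}p/page")
print(f"Phaser 8560 restock: {phaser_8560:.1f}p/page")
```

Around 3p a page for colour crayons is indeed competitive with many manufacturer toner costs, which is the commenter's point.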
ball-point pen, and sometimes even pencils, cannot write over the wax "ink."
It's also a bit hit-and-miss to laminate solid ink printouts. Some laminators are just a little too hot and colours will change as the printout passes through the machine!
On the other hand, that slightly-raised feel, particularly on high quality paper, does lend a certain "class" to printouts, especially letters and invitations :-)
"Your time dilation is strong. You could get a Canon LBP4 for maybe £500-£600 in around 1996. And that's over 20 years ago. It's depressing when you're thinking "I dunno - 15 years maybe?" and it turns out to be 25..."
Ahem, maybe the boss bought it from a friend, wink wink, nudge nudge. Maybe the boss had a side company that was selling supplies to the company. Or just maybe he lied and spent the cash on something else.
This was over 20 years ago - Laser Printers were new on the market and expensive.
Your time dilation is strong.
Indeed. The first HP LaserJet model came out 35 years ago (which I admit is "over 20"). "Expensive" is subjective, but as a graduate student I bought a Lexmark laser printer in 1992, and I certainly didn't spend thousands of dollars on it.
I no longer have that printer (done in by a failing PSU after about 12 years of service), but I still use my early-1990s HP LaserJet 4.
> No switches... because 'manglement' decided "we don't need those"...
Imagine all that on 10Mb/s HUBS - because manglement decided that switches were too expensive and it was only a student network, even after being told in no uncertain terms, two days into the first term when no one could do anything, that they had to sort it.
Cue the entire thing going titsup when 36 students start up Office simultaneously (multiplied by N classrooms all doing much the same thing at the same time).
Now connect that into the admin network (also hubbed) with no isolation between student and staff systems.
"Imagine all that on 10Mb/s HUBS"
HUBS? You're living in luxury there.
Mid '90s. College network. Everything, and I mean everything, wired in daisy-chain 10-base-2. Morning startups would often kill the network. The machines booted entirely off the network. Everything starting up Windows 3.11. They weren't smart enough to stagger switch-ons. And this isn't counting nefarious teenagers breaking the chain by unplugging one of the BNC connectors...
Of course, this is discussing the times the network actually worked and the server didn't fall over. Which it did, about a third of the time.
"We had to network computers with a pen n pencil"
You think you're making a joke ... My card punch went on the fritz occasionally ... manually punched cards were frowned upon, and manually punched tape was strictly taboo, but in a pinch they did the job for small stuff when t'brass weren't watching.
Related but different: I still have a piece of working gear using AUI with a 10bT adaptor plugged onto it.
It's my Specialix terminal server, serving up my test playpen's rack of serial ports over IP on a private closed network. Given that they went under around that era and formed the basis for the venerable Perle CS9000 and other enterprise-class terminal servers of later years, it's still quite useful. Saves me walking out to the server room to get a console on the switches and other stuff that supports terminals.
Security patches for the IP stack? Yeah, not so great. However, it's so old there are no published exploits for it, and anyone who knows one has probably died of old age. But that's also why it's on a closed network with no route out...
My university back in the 90s - a small college - had issues with students handing in work. Specifically, there was no way of knowing when things were submitted and whether it was before the deadline. A technological solution was proposed where each student would use their magstripe card to identify themselves: they'd swipe in before handing over projects, and that would give a date and timestamp. They asked someone to write a program to do this and then ran a test with the beta version. The test proved that a lot more work was needed, because lecturers had to upload the assignments with deadlines onto the system. There was no means of checking what had been handed in, though, so it was open to abuse. If you had two assignments due with different submission dates you could hand in something late: you just handed in the earlier assignment and said it was for the one due later.
So they had books of forms printed that were filled in at reception with your submission. Half of the form was then handed in with your work and the other half you kept as a receipt. I may have borrowed a book of forms at some point to help make submitting my work easier. I certainly never gave them out to people.
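The loophole is easy to see - and to close - once the submission record includes which assignment each swipe was for. A minimal sketch, with invented course names and dates:

```python
from datetime import datetime

# Deadlines keyed by assignment; names and dates are invented for illustration.
deadlines = {
    "CS101-essay":   datetime(1995, 3, 1, 17, 0),
    "CS101-project": datetime(1995, 4, 1, 17, 0),
}

submissions = []  # (student, assignment, timestamp)

def submit(student, assignment, when):
    """Record a submission against a named assignment; report if it beat the deadline."""
    on_time = when <= deadlines[assignment]
    submissions.append((student, assignment, when))
    return on_time

# Handing work in against the earlier deadline no longer helps:
# which assignment it was for is now part of the record.
print(submit("marty", "CS101-project", datetime(1995, 3, 15)))  # True - project not yet due
print(submit("marty", "CS101-essay",   datetime(1995, 3, 15)))  # False - essay was due 1 March
```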
Brand-new high school building opened in January.
Brand-new windows (that didn't open) and exterior doors with good seals.
Brand-new HVAC systems pumping out plenty of heat, even in the largest areas.
Then April hit and no one knew how to switch the bloody thing over to air conditioning.
Two weeks of sweat and not one apology from the school board to the students and/or staff.
As for IT...
Fall 1997, same building.
Up to this point, the new network was doing just fine.
Then the school board and administrators rolled out a new computer-based attendance system for all classes (4 per day).
The new software was supposed to load students' photographs from last year's picture day (same as the student ID cards).
But no one thought to test the software before Day One to help get the photos pre-cached on their PCs.
It took a whole week before it stopped acting like a building-wide DoS attack when each class started.
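For what it's worth, the usual fix for that kind of start-of-class stampede is to spread the fetches out with random jitter rather than firing them all at the top of the period. A minimal sketch - the window size and machine count are invented, not the district's actual fix:

```python
import random

def fetch_offset_seconds(window=300, seed=None):
    """Pick a random delay, in seconds, within `window` before class starts."""
    rng = random.Random(seed)
    return rng.uniform(0, window)

# 30 machines in a classroom now hit the server at 30 different moments
# spread over five minutes, instead of all at once when class begins.
offsets = sorted(fetch_offset_seconds(seed=i) for i in range(30))
print(f"first fetch at +{offsets[0]:.0f}s, last at +{offsets[-1]:.0f}s")
```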
We need to give manglement more credit for things they just don't understand....
Back in the last century I was tasked with scoping some new racks and servers as part of an upgrade. Knowing full well some of the items/budget would get knocked back, it was standard practice to add some "sacrificial" items to the list. I think we needed 5 switches, so I put 10 on the list. If we got all 10 then we had some spares, or could quietly upgrade other parts of the network. Similarly, I added "wall mount cabinets for above switches", which we could either sacrifice as a cost saving or, if approved, use as new racks for other rooms/wiring cupboards.
Anyway, I submitted said list/proposal, then was out of the office for a few days whilst manglement and beancounters had their meetings. A week or two later I was rather surprised to find everything had been approved - except for all the switches and wall cabinets. Well, a server room with no network switches is not much use, so I started to investigate and ask questions.
Took a while, but eventually I found that one senior manager decided that wall mounted switches for the servers were not really considered cost effective as each server had its own on/off switch already built in......
Took a while, but eventually I found that one senior manager decided that wall mounted switches for the servers were not really considered cost effective as each server had its own on/off switch already built in......
Copies of this story should be left lying about in boardrooms everywhere (with an explanation of why this is dumb-fuck behaviour, because dumb-fucks can't figure it out for themselves). (Similar to Blinkenlights signs on mainframes).
No switches... because 'manglement' decided "we don't need those"...
Not very long ago I installed over 60 PoE devices in an office - cameras, access points, VoIP phones.
Mid-management ordered that many PoE injectors instead of buying a couple of PoE switches, because some money was saved. Of course, all the required power outlets, power supplies and extra cabling took up half of the cabinet space; it looks really amateurish (and it is), and - to put it mildly - is unpleasant to manage.
The mid-management ordered as many PoE injectors instead of buying a couple PoE switches because some money was saved.
I once had to install PoE-powered access points in a couple of locations. For the first few I was supplied with 802.3af-compliant injectors and splitters, and all was good. For the next couple I got issued with what turned out to be cheapies that just put 15VDC on the spare wire pairs (this was all 100BT), and of course that didn't work out so well with the more remote APs. A 20m run was OK, between 20 and 25 was a bit hit and miss, and anything over 25m the AP collapsed in a gibbering heap the moment its radios got powered up - if it started at all. Multiple meetings with a team of beancounters ensued, the final one ending with a threat to strangle any of them that dared change the approved component list again, using a length of the cable from the APs that wouldn't work. After stuffing the cruddy pseudo-PoE gear (nasty square metal boxes and power bricks with conventional transformers) into their posterior orifice.
Also, at one location one of the APs was to be positioned right on top of the equipment rack, so using PoE would be quite superfluous; I could just as well have plugged the original AP power brick into the socket that the PoE injector would be plugged into, but the technical nitwit overseeing the project denied that change. On the first site visit after the acceptance inspection, the PoE setup for that AP miraculously morphed into the more sensible layout.
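The distance behaviour described above is just Ohm's law over the cable. A back-of-envelope sketch, where every figure is an assumption (24 AWG Cat5 copper, spare pairs paralleled, an AP drawing ~0.8 A with radios up) rather than anything from the actual kit:

```python
# All figures assumed for illustration; none come from the original installation.
R_CONDUCTOR = 0.084                    # ohm per metre, 24 AWG copper (assumed)
R_LOOP_PER_M = (R_CONDUCTOR / 2) * 2   # pair paralleled each way, out and back

def volts_at_device(length_m, supply_v=15.0, current_a=0.8):
    """Supply voltage minus the I*R loss over the cable run."""
    return supply_v - current_a * R_LOOP_PER_M * length_m

for run_m in (20, 25, 35):
    print(f"{run_m} m: {volts_at_device(run_m):.1f} V at the AP")
```

If the AP's input stage wants something like 13.5 V of headroom, 20 m squeaks by, 20-25 m is marginal, and anything longer browns out the moment the radios draw full current - which matches the hit-and-miss behaviour described. Proper 802.3af gear avoids this by supplying 44-57 V and regulating down at the device.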
Ah what you should have done was to put the PoE injectors at the endpoints rather than in the cabinets. It's unlikely that management would realise that you could have them at either end, and even if they did, you could always say that the extra power load would be far too much to have all in one place and that they needed to be spread out more.
"Sorry we cannot connect all these IP cameras up as there's no power socket for them and yes I know the injector right next to your phones is annoying and unsightly but, you-know, power!"
Ah what you should have done was to put the PoE injectors at the endpoints rather than in the cabinets.
The APs were often to be installed in ventilation shafts, broom cupboards and such, where a power socket would either be nonexistent or prone to being reused by cleaners and the like.
Many years ago, when working as a software developer, I was at a company where spending money on equipment that wasn't going to be used by the owner of the company, or wasn't going to make an immediate profit, was an arduous exercise in long-term persuasion. The development PC I used, and the network connectivity and server that it relied upon, were so slow that compiling (building) the application literally took 10 minutes. This was at a time when it was frequently important to perform a full build of all linked files rather than just the modified one, as this tended to make software debugging, well, reliable.
In the end the plan was simple: whenever I was compiling the application I was to look as expensively bored as possible, having already exhausted all available trivial tasks. I had already peeled the labels off all the floppy disks that we (re)used to distribute software, so when asked if I had anything to do I was in a position to state that I'd already peeled a (large) pile of disk labels - and could produce them physically as evidence - and that in the meantime I was waiting for the application to compile. I had performed a ridiculously menial task (it had to be done, and I've always pitched in with things, so I didn't mind doing it), and made it rather clear that this wasn't a great use of my time.
It took about two weeks, and from memory half of the first week was spent peeling floppy disk labels with the remainder of the time compiling as often as possible in the hope that one of the owners of the company walked past while I was (im)patiently waiting. I got a new PC, we didn't get a new server for another year (which is a different story altogether as it was a reconditioned unit that we had as a result of an insurance claim by a client) but at least when the files were on the local system I could compile in seconds rather than minutes.
On the other hand, having a snail slow development PC did teach one to code efficiently (glares at almost every developer out there) and to think about code a little more - had plenty of time to do so, of course.
First proper engineering job, there was a report run every morning on the slowest laptop (an old 486) in the engineering department. The report took about 20 minutes to run, and the reason it was run on that machine and not one of the new P90s was that the button to run it was pressed at exactly the same time that the day shift took their break for breakfast. Cue mug of tea and a full English every morning.
It was still running on the same laptop by the time I switched jobs about 2 years later :)
When at said places, don't let your PFY colleagues on the Helldesk near your machine when you're away on holiday. I came back to work to find that the spot where the beige tower was supposed to be was empty, then got told a fellow PFY had blown up the PC trying to overclock it.
So I went from a PIII on Windows XP to a PII on Windows XP (with visual effects off it wasn't too much slower than the previous machine), as there wasn't any spare hardware in the Special Reserve building (reminds me - the hubs were above the roof tiles there as well).
To be fair the company was bankrupt the year after.
I think it was Special Reserve that I bought the Star Trek: The Next Generation - A Final Unity game from and then entered (and won) the competition to win the entirety of TNG on VHS!
They changed the rules slightly after that as I had put as many postcards in an envelope as would fit within the first class weight limit of the time to maximise my chances of winning :-)
I used one of those along with 20 other engineers, and things could get slow - a 3 minute jobbie would take half an hour or more. Until the day I discovered that a program I had would crash and leave me in some superuser debugging mode; then I could lift the priority of my batch job to 2 below max (any higher and the OS hung) and my job would be done in 3 minutes, and the system managers never found out why everything else ground to a halt.
I never did it on bigger jobs because they meant trips to the really good library we had there to further my knowledge of obscure computing ephemera.
>> Slow reports and breakfast breaks
Ah, the collision between technology and internal politics and procedures. Too often manglement ignore these things. You can almost be sure that a new accounting/auditing system will attract the attention of whoever is filching supplies or cooking the books. Some would say it's worth faking such a system in order to flush out anybody who is on the fiddle. Just look for whoever is sneaking in a sledge hammer....
And the rest.
Back in my first job (late 1970s) I was working in a time-sharing bureau (a sort of cloud on a single machine) which also had an application development department, and a colleague there was developing an early client/server booking application for a holiday camp company. Grand programmer, but not in favour of writing code in separate compilable modules - umpteen thousand lines in a single COBOL program generally took the best part of overnight to compile! Clearly he wasn't happy, and I guess neither were management or the client. As in those days it was quite normal for the source code of the system programs (including the COBOL compiler) to be distributed, I spent some time profiling how the compiler was working, and concluded that the bulk of the time was being spent sorting and re-sorting the symbol table for the code being compiled (virtually every time a new overlay of the compiler was brought into memory).
Looking at the coding of the sort routine, I saw that it was the most basic sort possible (order n^2 or worse) - so I dug out my recently discarded university textbooks and re-coded it using a much more efficient algorithm, in what I thought was really neat machine code exploiting the machine architecture in some "interesting" ways. Compile time for the application dropped from several hours to circa 20 minutes. Still not brilliant (breaking up into modules was really required), but at least several compile runs became possible each work session, rather than the one or two previously.
I did submit the revised compiler code to the manufacturer (Digital Equipment) as a "bug" (sorry, Software Performance Report), but I was never sure if it was incorporated into the production compiler, as I never ran across such daft sizes of application design in the remaining ten or so years I continued working with DECsystem-10/20s.
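The scale of that win is easy to sanity-check with comparison counts. A rough sketch, with invented sizes - the real symbol table and sort routine are long gone:

```python
import math

# Comparison counts stand in for time; sizes here are invented for illustration.

def naive_comparisons(n):
    """Worst-case pairwise comparisons for a basic O(n^2) sort."""
    return n * (n - 1) // 2

def efficient_comparisons(n):
    """Ballpark comparisons for an O(n log n) sort."""
    return int(n * math.log2(n))

symbols = 20_000    # symbol table for a very large single COBOL program
resorts = 50        # times the table was re-sorted across overlay swaps

print(f"naive:     {resorts * naive_comparisons(symbols):,}")
print(f"efficient: {resorts * efficient_comparisons(symbols):,}")
```

The raw ratio overstates the real-world gain (disk I/O and overlay loading don't shrink), but it shows why swapping the algorithm, not the hardware, was the fix.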
10 minutes? Pah! I raise you a Data General Model 30 compiling Fortran for up to four hours. Trying to look busy during that time was challenging. Let's just say that the devil finds work for idle hands!
So, exactly how much pr0n could you view on a Model 30 ? Or did it take that long to generate one ascii image?
25 years ago. Slackware 1.0. I decided to see if recompiling the kernel for my hardware was really all that worthwhile. Keep in mind that I was evaluating Linux (and Slack) on a spare 386SX16. 8megs of RAM and a 16meg swap file should be plenty, right? 27 hours later ... a reboot into the new kernel, run a couple speed test scripts and ::drumroll:: I managed to speed up the system about 3.5% :-)
Was several years before I bothered recompiling the kernel at home again.
 Main (Intel) machine was a 486DX2 with a bank-breaking 16megs, dual booting BSD and Coherent. Slack soon joined the pair, and rapidly became my OS of choice. Still is.
"So, exactly how much pr0n could you view on a Model 30 ? Or did it take that long to generate one ascii image?"
Ha! This was in the days of the stash of a certain type of magazine in a colleague's desk, some of which featured a young lady who used to be a secretary in our department.
10 minutes! That's a blink of an eye!
I've worked on projects where it's taken a whole half hour to build and deploy a project to a dev environment. But it was ok! My tech savvy line manager told me to make as few mistakes as possible to reduce debugging time.
Worked at a company on a *huge* C++ app where one of my good mates was working on the 'main module' for the system. Over the years, the 'main module' had turned into a dumping ground for everything which didn't have an obvious home (or that the devs were too lazy to find one for). In spite of having the fastest hardware money could buy, linking the main module took over 20 *minutes*! My mate could only try about two dozen changes a day. He did a *lot* of web surfing.
Actually, I'm typing this at work on a PC sporting a sticker proudly proclaiming its "Pentium inside"... and 4GB RAM and a 32-bit version of Windows... complex tasks (like highlighting a cell in Excel or switching between Outlook and Notepad++) make the PC think twice or thrice before moving... ahh, these are the good days!
Our company has a policy of renewing laptops every 3-5 years and installing Office on them whether you want it or not. Great, you'd think.
Now try asking for developer tools. You must ask your current project manager, who of course sees no reason to do this. Why should his budget pay for something that you'll still be using after you've left this project and are assigned to another?
Static code analyser? IDE? You're lucky if the VM you've been assigned to do build work on a) is not shared between 100 other VMs on the same server and b) has the full set of standard UNIX commands.
And you have to use C V Fucking S for the code repository. I don't know whether to be thankful it's not SCCS or cry.
In case anyone's confused I'm not waxing lyrical about 20 years ago. I'm talking here, now, in 2018.
If it helps, I'm a Programme Manager (Project Manager squared) and I'd fund that.
The amount of deliveries that have fallen on their arse because of this kind of thinking.... Perhaps that's how I got demoted and now actually have to deal with the PHBs as the Developers' Champion?
I always try and make sure my devs have the appropriate tools for the project, and will go to bat if need be. Sometimes even the PM is at the mercy of the PHBs. I took over a project many years ago the week TOAD became a commercial product. Just my luck that it needed to be licensed separately, at the farcical price of £1,500 per copy. My poor devs were given brand new top-of-the-range machines but were forced to use the Oracle development set and vt100 terminal emulators.
I did have a huge bust-up with a dev team manager once when I was a tech support manager. He got hold of an evaluation copy of the COBOL compiler for my IBM VM mainframe (a one-off 30-day licence) because he'd appointed a contractor who only wrote COBOL (in 1990, when we were a SQL developer shop). He had the contractor in for 4 weeks, developed the code, and de-installed the compiler. Then it failed User Acceptance Testing. Unfortunately for him, I wasn't willing to pick up the £30,000 p.a. cost of the compiler licence for one module in one app, and one of the other devs had to reverse-engineer it in Oracle.
And you have to use C V Fucking S for the code repository. I don't know whether to be thankful it's not SCCS or cry.
Fnarrrh - try using Rational ClearCase with Rational Team Concert. Before diving into that void, one sniffed a generous lump of wasabi in the morning, and the rest of the day would seem somewhat better by comparison.
10 minutes? Oh, right, others have long-since pooh-poohed that.
Anyone else remember the era when to build gcc, you would bootstrap a skeleton gcc using whatever native or other cc you could find, then use that gcc in a second pass to build the real thing?
On my first encounter with gcc, I had to run the first pass with Sun's bundled cc, then go back over reams of error messages and retry, iterating quite a few times before I had a working install. And each build wasn't ten minutes - it was an overnight job.
About 30 years ago - when I had "mathematician" in my job title - I played my part in shortening some long jobs, by coming up with better algorithms. A mathematical model to calculate and plot coverage maps, down from over 4 hours to under a minute. Bootstrapping a tracking device (predecessor to satnav), down from 45 minutes minimum to about 90 seconds average to acquire a fix from cold.
 Silly question in this forum: of course half of you remember that!
Taking another detour from waiting for compiling...
I left a modern laptop running over night in the office yesterday, coz -
It hadn't finished installing the June Windows 10 update yet, coz -
No one had left it turned on for long enough since June, coz -
By my estimate, it was gonna take 15 hours to complete the update, having already downloaded it when I checked it yesterday morning. Maybe longer, I'll check it on Thursday.
"Anyone else remember the era when to build gcc, you would bootstrap a skeleton gcc using whatever native or other cc you could find, then use that gcc in a second pass to build the real thing?"
Remember? I'll be doing that later today. An embedded system I'm responsible for uses Aboriginal Linux, where as an "air-lock" step, you first build gcc to build a host gcc (moving from "whatever compiler you happen to have laying around" to "we know what this compiler is, we just built it"), that is then used to build a target gcc, that is used to build the rest of the OS in a VM (in my case, a 486 VM). Linux From Scratch does something similar, you first build a gcc that is used to build a target gcc, that is then used to build the rest of Linux from Scratch in a chroot. Apparently Linux From Scratch was the inspiration for that process in Aboriginal Linux.
There was the time our company upgraded our NetWare 2.01 (running on a '286) all the way up to NetWare 2.15. That seemingly small jump (I don't think the 3.x series was out yet) turned a decently-performing network into a SLUG. A slug on a sub-zero January morning.
I managed to memorize the key sequences needed to log into my system in the morning (depending on whether I had to run an EDI pull at the time). I would boot my machine (which loaded its basic boot from floppy - no HDDs here), type as much of my morning login procedure into the keyboard buffer as possible, then go get my morning coffee, come back, and pick up at whatever point it had got to. Entering customer orders was horrible: you could watch the order entry screen redraw line by line. Fortunately the boss accepted the point that the server HAD to be upgraded (to a shiny-new '386, woo-hoo!).
Of course, running on ArcNet, there was only so fast the system was ever going to be (ethernet still being wildly expensive at the time), fortunately RealWorld accounting was text-only at the time.
This sounds terribly familiar. I worked for a life insurance company that used a similar setup. The computer room had a bunch of '386s for production, but developers got the old '286s. The network was ArcNet and there was a special network concentrator on the back wall of the computer room.
Their client server system was a bunch-o-Novell servers running TriMark software, aka Magic PC database with the HDD-less clients connected to it.
In 2010 I was working for a certain large software company in a suburb of Seattle, and my manager didn't care that the PC the company had provided to me didn't work.
"That's the machine we bought for you, that's the one you have to use."
So after months of suffering random hardware failures (losing several disks and having to reinstall my development environment many times) and making no progress with IT or my manager, I finally just went out and spent my own money on a very nice developer class machine. (I won't get into the shitty software designs the manager forced me to adopt -- I'd been programming computers since before this guy had been *born*, and he was an idiot).
I moved on to another, better project and much better managers. Took the PC home, since it was mine.
Sweet, sweet revenge a couple years later, when I got a call from Google asking about that manager. He'd been interviewing and had used me as a reference. That was a fun ten minutes.
I'm ASTONISHED at your story because (a) they let an outside piece of hardware onto the network and (b - and something I have experience in) once you bring in your own hardware it becomes property of the company. My new PC-from-home became the boss's new PC. Fortunately that company is in my past.
a.) Sometimes it allows you to be more productive. Good developers know more than system administrators; that is why they are paid more. It is not like the silly guy from sales bringing in his malware-infested, five-years-out-of-date Windows PC with no AV, full of pr0n ....
b.) Since you fell for this, I understand why you are mumbling about a.)
Do you happen to administer a Windows network ?
10 minutes? Pah! I just wrote a compiler which takes *17* minutes to compile a one-line 'Hello, World!' program, and I have the video to prove it:
Admittedly, it is running on a BBC Micro. (See http://cowlark.com/cowgol/ for the main project page.)
That is nothing. Back in the 90s, a few weeks after I took over as IT manager at a place where none of the PCs had been changed for years,
one manager came in and demanded a new PC for a new starter he had hired without advising IT. He demanded a PC NOW. I said I hadn't got anything suitable.
Look, he said, you have some in a pile over there. I explained the pile was the ones I had replaced as the slowest in the building, then raided for memory to get the other ones to the giddy heights of 1MB (I think it was a 286 or early 386SX, when we were putting out Pentium IIs). I said that if they wanted the staff member to resign immediately then sure, I could build one of those; personally I would recommend we order a new one. Don't be difficult, I was told.
2 days later I had installed Windows & Office from the network (I had discovered a network install meant I didn't need to swap 40 floppies, so I loved it).
The machine was pronounced "ready", the manager watched it boot windows for 10 minutes complaining all the way. Twenty minutes later I had a signed purchase order.
sometimes you just have to let them figure it out for themselves.
... who had a boss who refused to pay for training. And then said boss yelled at the staff who couldn't support the thing the training was for. I've seen it zillions of times. You'd think that somewhere in management school they'd point out that that trick never works.
Ah, well. I get more loot picking up the pieces, I guess. But what a waste.
This round's on me.
Experienced that one. Some appalling "Enterprise Service Bus" thing that comes with zero documentation - the vendor makes a stack of money from training courses. You don't even really program the thing; it's driven by process diagrams with cryptic icons representing processes, between which you pass around typeless collections of key-value pairs.
Reminds me of what happened here a couple of years ago.
I'm a 20-year veteran, so have been there and done that on most things. But we have a new tool-type which is different to our existing run of the mill stuff (I work for a semiconductor manufacturing tool vendor), and I was asked to support it. Also as background I'm a certified trainer on the older tool types.
So I got trotted off around the globe for a week's training on the aforesaid tool. All very nice and jolly, except I got back home to an email proudly congratulating me on now being a certified trainer for that new tool type too.
Yup, after a grand total of a week's hands-on with the new tool, I was expected to (and indeed actually had to) train both colleagues and customers on them. Shall we say the first couple of courses were "interesting", but at least they sharpened up my skills at winging it and educated guesswork...
Ah yes, experienced this recently.
It was brought up in a regional meeting that I wasn't HP Gen-9 Server certified.
When asked why, I told them that it had been decided (higher up) that it could not be justified to take me out of the field for 5 days of courses considering how few servers I repair since I am primarily a printer specialist.
Doesn't stop them sending me out on warranty repair calls for Gen-9 servers and then complaining that HP won't reimburse the labour...
In a school where I worked I found a locked room, got one of the caretakers to open it up, and saw a dusty room filled with old PCs that looked like they had never been used. It turned out that very soon after the school got a suite of brand new computers on a properly wired network, some kid stuck a knife into the cabling conduit. They survived this unscathed, but the school didn't have the money to get the contractors back in to do the repairs, so the room was locked up. (For context, I dealt with some contractors - probably not the same ones - and it would be between £50-£250 for a new plug outside the spec they were working from, and that was only because they were already onsite and had the stuff sitting in their van.) I'm guessing some were scavenged for individual classrooms, but what a waste.
Worked for a POS (Point Of Sale, but the other meaning was also appropriate) software developer back in 2015. The owner had only installed two phones in the whole building that we worked in - one on his desk and the other for the first line support team to share. Sales and anyone at a customer site were expected to use their personal mobile phones.
The development machines were a bunch of clapped out machines cobbled together from off the shelf parts. The worst thing was that going on site meant lugging a battered tower case, LCD screen, keyboard and mouse along. Great impression to give the customers - although they were pretty clueless or else they wouldn't have bought the crap POS system in the first place.
The owner was also a control freak and would only pay for things by cheques that only he was authorised to sign. That included our pay. On one occasion he went to sail his yacht around the Caribbean, leaving us unpaid for several weeks after the date we were supposed to be paid.
All that and the regular bawling outs that the boss gave people were enough to convince me to change job.
Not actually IT related, but in a previous career (before I saw sense, packed myself off to uni and changed to being a technician), I was doing general admin work for a local hospital (now demolished). The staff and visitor canteen needed a refit, as it had not been refitted since the early 70s (this was the early 90s), so the hospital got a company in to do it. They produced a lovely plan, showing the canteen (which had been the hospital chapel in a previous era) with beautifully positioned concealed lights which really did an excellent job of highlighting the extremely intricate detailing on the original ceiling - the fitters in the 70s had just slapped a false ceiling over it. It did look absolutely stunning. The trouble is, the manager did not want to pay the thousands the company wanted to add a proper glass entrance hall, so he asked them to remove it from the plan, then got a local fitter in to install a home conservatory. As a result, the door (because the home conservatory wasn't designed to stand up to hundreds of people walking through it) was out of action for repairs more than it worked.
Also, in my current job, we had a room fitted out as a small studio. We spent tens of thousands of pounds on proper, good quality studio lighting, then my boss had to cut the cost of the project. So, he kept the lights, and asked the installers to remove the computerised control system we'd asked for. The only control we had over those lights was the on/off switch on the wall, and whatever controls the lights offered on the panel on the back of each light (assuming there was one). He also asked them to remove the control booth that was supposed to be at the back of the room, and most of the wiring. So, any users had to borrow equipment from us, or provide their own, and the only thing they had to record on in the room was a PC or Mac we provided.
I'm afraid that this sort of thing is not uncommon in the wonderful world of academia. Grants often include the purchase of big-ticket equipment, services or whatever, but the smaller, routine items come from a general overhead account or even aren't budgeted at all. Training is the obvious one. I was fortunate and worked for an organization that recognized the necessity of training, but you could find that you'd got the money to buy a vastly costly piece of equipment and then struggle to get it installed! We often found ourselves moving heavy, awkward equipment using members of our (highly skilled) team. Fortunately, H&S abolished that activity, as it was recognized that using untrained and ill-equipped personnel to move heavy equipment wasn't the safest of things!
I had the opposite experience. Working for a charity, a nice company (OK, it was Autodesk and I bet you haven't heard them described as "nice" before) gifted us a copy of Autocad (last DOS version). They also threw in three days training for two of us. However... the charity wouldn't buy a machine to run it on. Cue sudden panic when the head of Autodesk UK decided to visit to see how their gift was being used. Which is how I ended up with a 386 and an A2 pen-plotter almost overnight. Of course, in the intervening 6 months my memory of the training was a little shaky. As a side issue, rendering and plotting whole very large buildings took a looooong time on the equipment mentioned - 24 hours wasn't unheard of.
"I had the opposite experience. Working for a charity,"
Same here, I do volunteer work for a charity that looks after seniors, basically as the onsite IT guy, helping seniors with their technology. They survive on donations and grants. The charity has existed for a very long time (since 1948, if I recall correctly). The top executive positions are elected positions, and they have a high turnover of volunteers. When I started early last year, I was given a small office that had a variety of computer hardware, most of it supplied through grants, some of it purchased, some of it I have no idea where it came from. None of the computers had been updated for years, since that was the last time they had an IT volunteer. The paid-for IT support company only works on the office computer systems, not the donated freebies used for training. There is stuff in there that everyone forgot about.
Often I'm asked to find low-cost solutions to their IT problems, coz they just don't have the money for more expensive ones. So far I have managed to solve all but one of their problems, either by re-purposing old equipment they knew about that was sitting unused, or by finding stuff that had been hiding in a cupboard for years. The one exception was their need for a Chromecast to hook up to the projector, to demonstrate Android stuff to a bunch of people, some of whom have bad eyesight and can't see the details on a small phone screen. I had initially been using my own, but eventually had them purchase one. I suspect they had one before, or were borrowing one, as some of their office computers had the software for it. I just couldn't find the old one anywhere.
Maybe the opposite problem for me, the boss HAD paid for triple PSU redundancy on a legacy comms switch many years ago which supported a Critical National Infrastructure service. Power loading meant that the two shelf rack of cards could run on just one PSU so having three should have meant near 100% uptime.
Enter, stage left, an engineer (err, me) who, while moving some cables (the old heavy shielded RS-232 type), let one fall from height, miraculously tripping all three power switches, which were located in a row at the bottom of the rack. It took 10-15 mins to get the rack back online, but for most customers the legacy serial connections were spread across two racks, which even I didn't manage to break.
I 'fessed up to the mistake and thankfully management supported me, albeit change control processes were tightened.
I had a similar problem with an HP blade chassis. It had four redundant PSUs, and given the load, should have been able to run on any two of them. I was moving some power cables around to make our rack a little bit neater, and after double checking that all of the other PSUs were online, I pulled the power out of one of them.
The entire blade chassis died straight away, with me left standing there wondering WTF?
After some investigation we discovered that one of the PSUs in the machine was faulty and couldn't actually sustain any load. Most of the time this wasn't an issue, because the other PSUs took all of the load, but me pulling the power to a good PSU put load on the bad one, which immediately died, taking everything else with it.
Fortunately at that job spare cash wasn't so hard to find, so we bought the full complement of six (I think) PSUs for that chassis, just so it couldn't happen again.
I once worked for a large payroll company that, when they moved to a new building, under-specified the power requirements for the server room, UPS etc. The result was similar to the OP's: no capacity for dual power supplies and so on. They were also still running Win2k and 2003 (and earlier versions) last year. Anonymous, as they are still in business!
I have seen it often.
Many companies where I have worked operate on the 'budget protection' mindset where each manager jealously guards their own budget so that they look as efficient as possible.
We needed a backup generator and so the simplest option was to get a turnkey ISO container fitted generator. Cheap, fully tested and guaranteed to work but you pay for what you get.
Instead we bought a stand-alone generator, a cheap controller and a standard container to put it in, because the procurement boss wanted to save money. Getting it to work was down to the commissioning team and their budget. The genny was manual-start, so it needed to be modified for the controller to control it, and the container needed modifying too.
The end of the long process was that even after a lot of work and a huge spend, much more than buying the turnkey ISO Genny, it worked after a fashion but could not spin up fast enough to do the job.
But the important thing to remember is that the procurement manager saved a lot of money from his budget even if the company lost a lot of money.
Sadly I see this approach to spending all the time. We had a document management system that used the cheapest server stack that would run the software and handle the number of users. £10K more and we would not have had hundreds of engineers waiting 20 seconds for a search result. Multiply that by the number of searches per day, times days per year, times a billable hourly rate of say £60, and the £10k pales into insignificance - but the IT procurement manager saved some money; the engineers' time is another budget.
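The arithmetic above is easy to sketch. All the figures below except the 20-second wait and the £60 rate are illustrative assumptions, not numbers from the comment:

```python
# Back-of-envelope cost of slow searches vs a one-off server upgrade.
engineers = 300          # "hundreds of engineers" (assumed figure)
searches_per_day = 20    # searches per engineer per day (assumed)
delay_s = 20             # seconds spent waiting per search (from the comment)
working_days = 230       # working days per year (assumed)
rate_gbp_per_hr = 60     # billable hourly rate (from the comment)

# Total engineer-hours spent staring at a progress bar each year.
wasted_hours_per_year = engineers * searches_per_day * delay_s * working_days / 3600
annual_cost = wasted_hours_per_year * rate_gbp_per_hr

upgrade_cost = 10_000    # the server upgrade that was skipped
print(f"annual waiting cost: £{annual_cost:,.0f} vs one-off £{upgrade_cost:,}")
```

With those assumed numbers the annual waiting cost comes out around £460k a year, dwarfing the one-off £10k, which is the commenter's point.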
"Many companies where I have worked operate on the 'budget protection' mindset where each manager jealously guards their own budget so that they look as efficient as possible."
Ah, budgets. And what happens when different budgets fragment the ability to manage as a whole. I'm pretty sure it was lack of coordination between budget holders that resulted in the following sequence at Marylebone station years ago.
Station was repainted. Beautiful job. E.g. there was a bookstall handily placed between the gates to the various platforms, with a moulded frieze showing the sorts of things they sold - newspapers, books etc. - and each individual object on it individually painted. Must have cost a fortune. Painting budget.
The walls were sandblasted covering all the new paintwork with a coat of dust. Buildings budget.
Some of the tracks adjacent were filled in covering part of the sandblasted wall. Tracks budget.
The whole station entrance was reconfigured demolishing the carefully painted bookstall (which was replaced with a small, far less convenient cave-like space). Utter wanker's budget.
There appeared to have been no budget for running trains; every evening involved a long pause which I interpreted as being the time it took for them to find enough working DMUs to string together to form a train.
A company I worked for never bought us computers to develop on. Instead we were loaned computers bought in specifically for each project, and owned by the client. This meant every time you moved from one project to the other, you had to spend a day building, installing and configuring a computer to work on. This included swapping in and out hardware cards, connecting to the right network/server, attaching second monitor, installing your IDE, compiler from floppies, etc etc.
We got moved between projects a lot, as development cycles scaled the number of bodies needed up and down. And every time, we wasted days running about trying to get a functioning computer. Inevitably it meant that some computers ended up getting used on the wrong projects, simply to save the time and effort. This in turn meant running around swapping, hiding or reclaiming kit when clients visited, expecting to see their kit in use on their project.
The developer time wasted would have easily covered buying proper, permanent development machines.
Seductive she may well be, but dalliances often lead to her bastard lovechild, the Panic Purchase, whereby the hardware that originally should have been bought is also acquired, or contractors are brought in at eye watering rates, usually exceeding the cost of doing it properly in the first place.
Ah yes, IBM.
In the relatively fluid* mid-90s, at my second or third major contract at a trading bank, we had an overnight batch on an Intel server which took, on a good day, about 8 hours to run. On a bad day it took about 10 hours - which cut into the traders' day by about an hour. Traders being raiders (this was an autocorrect but I like it so it's staying!), there was much shouting of abuse when this happened, but the IT bosses would not fund a newer/better server, so the support desk just had to put up with it.
Cue my impatience and a visit to the Head of Desk to have words about the abuse my team were being subjected to by his team. I explained the issue - that IT wouldn't shell out for a new server - and he asked how much one was and how quickly one could be obtained. About £4k, says I, and six weeks via regular supplier/order process or about 2 hours via Tottenham Court Road. He handed over his credit card and £20 for the taxis.
Problem solved - although I got some abuse from the IT bosses for the out-of-process solution.
*i.e. JFDI was not entirely frowned upon.
Reminds me of the mid-90s server at the place where I first cut my teeth developing. The machine was old enough that the original BIOS battery was dead... and directly soldered to the motherboard. We also had to turn it off every day 'to save on electricity'... so yes, about 30 mins every morning wasted trying to get its settings rejigged, then waiting for it to start up, then finally setting the right time in the OS.
Another company had a batch of new servers for email but was too tight to invest in the new software, so they sat idle for about a year and a half before enough other things died (like UPSes) to justify a new rack and UPS system. The servers ended up being used for a VMware cluster.
I was on a pre-sales team for a large software house on several occasions. The challenge was that I was not considered senior enough by job title to qualify for business cards. This meant that on several occasions I was in rooms full of expensive corporate suits writing my details on the back of their own cards. I was finally given business cards on my final week of my notice...
One customer had a great idea to give the "vendors" the most rubbish PCs in their building to work with. Every expense spared - although I am sure this is not unique. The machine I was given took 8 minutes from pushing the button to being usable (Win95 IIRC). At premium software-house rates, across our team we reckoned that within less than 6 weeks of the year or so we slogged with these devices, it had cost them far more in boot-up (and reboot) time than the saving on the machines.
Many examples of shaving outsourcing deals and product procurements to uselessness. Our procurement teams have "saved" lots of money over time by not buying manuals, support and consultant expertise that then has to be expensively contracted later... funny thing is, they are still able to claim the savings...
Many, many moons ago I was the PFY at a large UK Accountancy firm.
We were rolling out Tax software to each office. They had been told what to purchase - Compaq Deskpro 286e for the clients, a 386/25 for the server, Netware 2.15, yadda yadda...
I get to one office to do the rollout (only about 5 PCs) to find no server or clients.
The Partner in charge of that office had decided in his own wisdom that he didn't want to pay the sort of prices involved, so had gone to an auction and bought a load of "PCs and servers" for next to nothing.
When I pointed out that what he had bought was a load of word processors, not PCs - they didn't have DOS/Win 3.1 on them and couldn't be networked - he decided to throw a fit and call my BOFH to complain about my ability to "do a simple task".
Turns out his idea of "next to nothing" was about 40% of what he would have paid for the proper kit, which he ended up having to buy anyway. When I revisited the office a few weeks later, the Partner was nowhere to be seen.
So you have two redundant power buses, but only the cables to hook each machine up to one of them. Why wouldn't you hook half to Bus1 and half to Bus2? That way you're mitigating any risk to only half of the machines.
There may be mitigating circumstances not documented in the story, but we have all laughed at stories where both PSUs were connected to the same power bus.
As ever, Pratchett sums up the fallacy of penny pinching and small budgets to a tee (and applies to servers, PCs and indeed most project items just as much as boots)...
A really good pair of leather boots, the sort that would last years and years, cost fifty dollars. This was beyond his pocket and the most he could hope for was an affordable pair of boots costing ten dollars, which might with luck last a year or so before he would need to resort to makeshift cardboard insoles so as to prolong the moment of shelling out another ten dollars.
Therefore over a period of ten years, he might have paid out a hundred dollars on boots, twice as much as the man who could afford fifty dollars up front ten years before. And he would still have wet feet.
That's not economic injustice. That's piss-poor budgeting coupled with false savings. Consider how much money you spend over the course of a year on caffeinated fizzy sugar drinks ...
The boots? White's. Of course. Purchase the correct tool once ...
Not really. Most people can't afford (or couldn't afford, before the rise of mass production) stuff that was actually decent. Many people with low wages can't even hope to save that $10 over the life of the boots, let alone save up $50!
Never forget there are desperately poor people out there.
(Also, never forget that cos schools don't teach kids the basics of economics, many will never realise that they can "spend to save". Fortunately Wonga are now bust, too, so taking out a 5000% interest loan to buy those $50 boots, which wrecks the plan entirely, stops being an option.)
So, I worked at this startup where we were building for several different platforms. I was building on Solaris, and they had me building on a networked drive that was SLOW. Builds took nearly 4 hours of an 8-hour day. I complained... and complained... and complained. A disk at the time (1994) was about $500, but it would have paid for itself in just a week in terms of time. When they eventually DID get me a local drive to build on, builds were no more than a few minutes each day. So they saved $500 on a drive that year, but wasted half of my salary at the time (half of which would have been about $35,000 US).
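The "paid for itself in a week" claim checks out on the back of an envelope. The salary and working-day figures below are assumptions chosen to match the comment's "half ... would have been about $35,000":

```python
# Rough payback period for the $500 local disk.
annual_salary = 70_000      # assumed, so that half is ~$35,000 as in the comment
working_days = 250          # assumed working days per year
hours_wasted_per_day = 4    # build time on the slow networked drive (from the comment)
hours_per_day = 8

# Dollar value of the half-day lost to builds, every working day.
daily_waste = annual_salary / working_days * (hours_wasted_per_day / hours_per_day)

disk_cost = 500
payback_days = disk_cost / daily_waste
print(f"payback in about {payback_days:.1f} working days")
```

Under those assumptions the disk pays for itself in under four working days, comfortably inside the commenter's "just a week".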
I have needed a piece of software and been told "sorry, can't justify it" or "OK, submit a 3-page request and we will get back to you around Christmas". As long as it did not require subscription-based support (no ongoing costs), one legendary boss used to say "book it to your credit card and claim overtime in the same amount". He's now a photographer, which is a real loss to IT.
We, society, grow by balancing yin/yang, risk/caution, saving/spending. We tend to remember the failures and for those few that warn of the cliff only to be driven over it, those memories will not go away.
But our memory is not a good way to judge what our level of concern should be. Worker pay and customer outages is easy to see in companies with many decades of records. I had the chance to do just that in a major company I worked for.
When it came to worker pay, the peak was in the 1970s. Looking at pay stubs for positions that still exist today was most interesting. Hourly pay was higher (inflation adjusted, of course) but not by much. What was very different was deductions. Total taxes deducted were under 20 per cent even for those with maximum overtime; some were less than 15%. Today all are over 30%, and those with OT are over 40%, some more than 50%.
The same can be seen with other deductions. Government programs that cost a dollar or two back then cost hundreds today, health insurance deductions have seen similar increases, and they were among the many new deductions. The take-home pay for some workers today was numerically the same as back then; though there were only a few such examples, I found them the most shocking.
Keep in mind that the 1970 worker was able to raise a family with 2.5 children in a detached house on that single income. Today the average household in that country has more than 2 incomes and far higher debt.
The minimum educational requirements were such that many teenagers were hired in 1970. Today those same positions require many extra years of schooling - so many that it has been more than a decade since teenagers could be considered for those jobs.
Of course there have been obvious improvements. Safety trends mean that more workers today get to die at home, and while notable safety improvements stopped more than a decade ago, they are still far better than the 1970s.
Service interruptions in the 1970s seem high by today's standards, but they were shorter and affected fewer customers. That company received awards for service but today struggles to meet the industry average.
That would seem to be an example of major penny-pinching having paid off for the company and government. The demands on workers increased, deductions increased while pay reduced, and the only obvious costs were morale, employee loyalty and a minor drop in quality. A major win for penny-pinching, and those that enabled it have all gotten promotions.
That is a penny-pinching win unless we look at the company itself. In the 1970s it attracted the best workers, was flush with cash and growing. Today it is many billions in debt and many are predicting restructuring, even bankruptcy.
There are many penny-wise, pound-foolish stories from that company, but IMO the most notable one is the pennies saved in wages resulting in the loss of the company itself.
Penny pinching is now required, except when it comes to pay for those higher in the company. A look at the decades of pay increases for the highest positions in that company shows that penny pinching means different things to different people. The loss of the company will hurt some more than others.
As I see it, the only way this could be considered a "Who, Me?" is if Marty hadn't gone to bat for the rails and cables. I don't expect non-technical supers to understand things like ventilation and redundant power. I absolutely do expect that they'd understand downtime and lost profits. Sometimes we need to help them understand the bigger picture... in a PowerPoint with pictures. If they don't get the hint after that, may Murphy have mercy on them.
This is a perfect example of "You get what you pay for"
In all honesty, you should have made it perfectly clear to TPTB that this is a fully unsupported scenario and that they are removing any and all redundancy for the sake of a few quid. Essentially it was your job to really push them to do it properly and point out the risks; then, when the shit hits the fan - and it will - you can take the email your penny-pinching cretin of a boss sent you absolving you of any and all responsibility, show it to his boss, get him fired, get his job, get a pay rise and then do it how it's meant to be done....
Wait, that's not just me is it :D WHOAHAHAHAHAHAHA
Ah yes. The number of IT managers who think that the dual PSUs are there in case one of them goes out is legendary. I was amazed at the number of dual PSU cords all plugged into the same power strip, or where the "left PSU" and "right PSU" power strips were plugged in to the top and bottom of the same outlet, or different outlets on the same breaker or power panel. It boggles the mind.
... with offices across the UK decides to roll out an Exchange network. I know nothing of Exchange, so the basic setup is done for me. I start it up, then set up mail accounts on it, but notice 3 trivial things missing:
No DLT to back up to.
We couldn’t have these because they were too expensive.
I went to see the comms director to get an idea of how small I could make the mailboxes. He said he didn't want the mailbox size limited, as all the staff were adults and could manage mailbox sizes themselves. Nothing would change his mind.
The first thing to happen was my Exchange got a virus and chugged it out to all the other Exchanges. The management then agreed all Exchanges could have AV, and we were to try one out on a 3 month trial. When the software trial ran out of days I was instructed to remove it and reinstall it. I had to do this several times before they dug deep and bought a decent AV, which must have been an aberration on their part.
We eventually got the DLT and UPS, but what got them in the end was no-one deleted anything in their mailbox and it filled the information store. The only way to get access to all the mail was to upgrade to the Enterprise edition at some hideous expense, plus the downtime.
Odd that we couldn’t afford it as the company owner had a personal fortune of £60m at the time.
"Odd that we couldn’t afford it as the company owner had a personal fortune of £60m at the time."
How do you think he got a £60m fortune?
hint - not by spending any more money than he absolutely had to. And even then, only if forced, preferably at gunpoint....
Towards the end of the last century [sounds more dramatic than 22 years ago] we moved a project team with specific security requirements. Almost at the last minute I was asked about the server room - this was a windowless room [oversized cupboard] with no external walls and practically no ventilation - I asked what aircon was being provided?
Long story short: the management had either overlooked this or severely skimped on it. I said that we should add up the power requirements stated on the labels on the back of the servers, switches etc. and use that as a guide - after all, every watt of electricity in was a watt of heat out [OK, minus a bit for the LED lights on the front of boxes and the spinning discs]. The finance director (obviously a well-trained installer!!) said no and booked in a small unit suited to a home.
When the servers started shutting down (the project was running at almost £1M/day costs) there was a sudden outcry. The home sized aircon struggled bravely but dripped water (yes it was mounted on the ceiling above the servers) - water with horrible snot like algae.
The company paid for external contractors to come in and give a quote. Guess how they sized up the needs?
As I was one of the sign-offs for starting their work, I took the opportunity of attaching my e-mail trail to the paperwork before it went to the board. The director got off OK but at least I wasn't blamed. :)
About 2 weeks of running inefficiently (and taking months off the life of the servers); over-the-top costs for fixing the issue when the room was full of kit [rather than doing it when empty]; plus time spent by expensive staff taking turns to escort outsiders into the server room and watch over them... all because "a home unit will do!"
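The sizing rule from the story - every watt of electricity in is a watt of heat out - makes the sum trivial. The equipment list and ratings below are entirely hypothetical, just to show the shape of the calculation:

```python
# Size the aircon by summing nameplate power ratings: at steady state,
# nearly all electrical power drawn by the kit leaves the room as heat.
nameplate_watts = {       # hypothetical kit list with assumed ratings
    "server-1": 450,
    "server-2": 450,
    "server-3": 600,
    "switch": 150,
    "ups-losses": 200,
}
heat_load_w = sum(nameplate_watts.values())

# Small aircon units are often rated in BTU/hr: 1 W is about 3.412 BTU/hr.
btu_per_hr = heat_load_w * 3.412
print(f"heat load ~ {heat_load_w} W ~ {btu_per_hr:,.0f} BTU/hr")
```

Even this modest hypothetical rack lands around 6,300 BTU/hr of continuous load, which is why a small home unit that's sized for an occasionally warm bedroom ends up running flat out and dripping.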
I took over as systems manager for a very small company just as they were moving (they were having the new offices fitted out, so it was an empty shell when I started)
The previous systems manager had helped plan out the building, and the "Server Room", was just a big cupboard with no outside walls, no ventilation, etc...
I looked at it, said we needed a much larger room, with a window, and aircon - got everything except the aircon - had to make do with a home unit sitting on a shelf (with drip bucket underneath), vented to the outside, an open window, and leaving the door open during the day.
I did persuade them to double up the number of power and network ports in the small technical area just outside the server room, where myself and my colleague worked.
As far as I know, that's how they are still running.
Once, long ago, in the darkness of a Friday night, I was faced with an initialization process after the 5-node cluster came up. While I was looking at the logs, I noted that the bundle of software that had been laboriously unpacked and distributed across the cluster was a) using about 48% more memory than it typically had before, and b) dumping debugging output to the logs while the daemons were starting up, instead of "INIT_START, INIT_DONE RUNNING".
I quietly pointed out to the release manager that we'd received a debug set from the QA team and we'd best either roll back or have QA do up a run without the debug. QA reported that it would take between 20 and 38 hours to recompile the application suite.
Y'all just *SPOILED* buggers with your 10 minute compiles.......
I can recall a particularly messy "3-node cluster" (that actually had at least seven nodes) that took three days to successfully restart. Turns out that years of upgrading bit-by-bit had resulted in an overall system that had no valid cold boot-order: I believe most of those three days were spent working out what disks depended on each other, and working out how to bootstrap the whole monstrosity into life.
Anyway, that's why my lot only do storage maintenance work on bank holiday weekends.
Have done similar for a financial institution: they'd paid a fortune for me and a small team to disconnect some rather tasty Alpha servers on one side of the country, then I was to follow them across the country and reconnect them in another data centre.
Unfortunately the power budget hadn't been upgraded to support the few extra kilowatts of disks, servers and various associated ancillaries.
So, the whole aisle went black on first power up of the newly located kit.
One of modern Britain's rare industrial success stories is Games Workshop, who still operate a factory in Nottingham. This last year they've done crazy well, 39% sales growth over last year, so they've gotten in more moulding machines so they can make more plastic soldiers to sell to nerds around the world.
...and the local power grid won't support the extra load. Welp.
I was witness to the Calibration Equipment Department tech putting his foot (very painfully - he fractured it, if I recall) into a filing cabinet (which didn't feel a thing) out of sheer frustration at the conclusion of a phone call with the Bracknell\Reading purchasing clerks, who were waiting for a fraction-of-a-penny drop on a certain transistor for a product line before they put in a super-large bulk order.
By extension, his urgent equipment repair (using the same 20p transistor), needed to build and test another product line, could go f**k itself, even though the entire assembly line staff and testing techs were stood around happily twiddling their thumbs - and he was expressly forbidden from ordering it from RS\Farnell off his own bat.
Back in 2003 I worked for a company that will remain nameless. They built a fancy new building in downtown Detroit with a backup generator for the data center inside the parking deck. When we got hit by the big east coast power failure of 2003 we transferred over without a hitch. About 10 minutes later it went dead silent in the data center. The crew that painted the new parking deck also painted the motorized cooling vents for the generator room shut. It went over temp in the room and the generator crashed.
I once worked for a hosting company whose racks and servers were your worst nightmare, so much so that the engineers in the DC would not even touch them.
Cables everywhere, all tangled up, you could not tell which power cable or ethernet cable was for which server, and you couldn't touch anything without accidentally pulling out another cable.
Tracing a cable to a switch port was almost impossible.
Servers with dual power supplies used split power cables, so if the fuse went in the plug, both PSUs still went off. Some split cables were used to power two servers.
If a PSU died, there were no spares, so the on-call IT guy was sent to PC World to pick up a desktop PSU, which was then retrofitted into the rack server - meaning the case was left open with a PSU sticking out of the top, so the 1U server now took 4U of space.
More than 50% of the servers were actually not even in use. They were retired servers from ex-customers who had since left, but were never removed from the DC. Most of them were still left online for years, draining power and running unpatched Windows OS. The company actually got fined every single month by the DC for exceeding its power quota.
The Firewall was an EOL product with software that was outdated and unsupported for several years.
The main router was so old and outdated that it couldn't cope with the packet sizes or the IP table requirements - which could have been solved with a cheap memory upgrade.
How this company stayed in business for so long was a complete mystery. The only reason they had customers is that the accounting was just as bad. Most customers were undercharged, and in a large number of cases, not being charged at all. Hundreds of dead domain names still being renewed each year, which should have been cancelled long ago, with no record of who they ever belonged to.
Seems that medical equipment is literally older than Moses (tm).
Because it has to be.
That MRI suite for example, has 4 (!) very expen$ive full length graphics cards.
Unfortunately they are also so old that the software wouldn't run on anything other than W2K, and making it work on anything newer would break certification, so the old dinosaur still chugs along, albeit with a new power supply, dual UPS and a special, expensive flat screen (magnetic field!).
HDDs had to be mu-metal shielded as well, not a standard item at all.
Guess what happens when someone plugs an MP3 player with a virus into the linked machine, which also has an airgap and a "DO NOT USE USB PORTS" sign. FFS! (facepalm)
Fortunately the hideously expen$ive software was fully backed up to CD-R so this was annoying but not as bad as being ransomwared.
Someone took the USB ports out after that.
AC, because security.
Warranty denied. I worked for a company that was a third party servicing certified HP kit. This guy complained that we would not replace his HDDs (those were not covered, as the server was sold without HDDs) but demanded we replace the RAID card, as it must be the reason his HDDs were failing. This ass clown had bought six 500GB HDDs from eBay and stuck them in the server. And this server was running Exchange.
Haha. I have heard of folks buying *used* drives off Ebay for DR purposes before sending a drive off for "proper" recovery (frowned upon by official DR companies) but this is a whole new level of assclown.
In fact I am in the meerkat for a used ST9750420AS if someone has a spare; this particular one seems to have a board fail. 9RT14G-500 0001SDM6 - spins but no activity, no clicks, nothing.
Has some important data on it, but mostly just web browsing. I'd like to get it back one day if possible, but to be honest it's not worth £600+, though it may one day be handy for verifying that my physics notes circa 2015 preceded official discovery by about three years.
Very fortunately it was backed up to a mirror so was merely an annoyance.
When I was working for a large electrical manufacturing company in the Midlands, I was once a member of a team that was developing a Motor Control Centre for all-electric ships. We were a loose mix of electrical and mechanical engineers, and I was detailed (amongst other functions) to keep an eye on the budget. I could write and sign for requisitions up to £1000, my immediate boss up to £5000, and the Project Leader up to £10,000. One day it became necessary to order a very large, very special, electric motor to act as a surrogate load for the equipment during testing (think 8 feet diameter by 10 feet long, weighing several tons). We approached various other companies, including our fellow engineers in a different division of our own company, and the cheapest quote we received was for a quarter of a million pounds, but no-one was willing to take responsibility for spending that much money. So, as I was the most junior and therefore the most expendable, I wrote and signed the Purchase Order myself. The Project Leader was summoned to Head Office and asked to explain why I, a mere Development Engineer, had been allowed to sign the PO. The Project Leader pointed out that, unless we had this motor, all development on the equipment would stop, and the Navy's shiny new destroyer would be just a floating hotel. We got the motor tout de suite.
...that an already INSTALLED 100BaseT infrastructure in a new build be downgraded to 10BaseT because she thought the higher speed would result in a higher total cost of ownership. No ship. The techs just said, "uh... yeah, sure! We did exactly what you said." Mumble mumble.
I can't remember the details, but ooo, 18? years ago, I was newly working at a local authority. After several weeks an IT chappie came around to set up computer support, log-ons and stuff. All I can remember is that it went so wrong so quickly that I stormed off in angry frustration and barged into the office of the leader of the council and shouted at him.
When they were introduced in 2009, smart meters were supposed to simplify the billing process and ensure readings were up to date and accurate. But the "first generation" smart meters - which the utilities bought in their millions and are going to keep fitting till they run out - are now installed in seven million households, and guess what: they are currently incompatible with the new national communications network, and the roll-out has been plagued with problems. No shit, Sherlock!!!!
Let me list some of them:
Smart meters make it harder to switch gas and electricity providers!!
Smart meters don't bring an end to estimated bills !! (or billing errors)
Smart meters won't work if you have a poor mobile signal in your area!!
The display units linked to smart meters are crude and difficult to understand!!
There's little evidence so far that smart meters will save energy - or money!!
Smart meters 'pose security and other risks'!!
Better still, they may even result in your home/business premises burning down, as reported in a BBC Watchdog investigation, aired in late July, which raised questions about the links between smart meters and fires. It was not clear whether the meters themselves, or their installation, were at fault.
Whatever happens in the end, consumers in the U.K. will all be paying for this ginormous monster fuckup, with no option to opt out!!!
"Smart meters don't bring an end to estimated bills !! (or billing errors)"
I recently moved house, after living for almost 12 years in a place where the owner paid the first $100 of each quarter's electricity bill per resident, and we paid the rest. So I didn't deal with the electricity company directly. I did note they had the old-style meters.
At the new place, the meters are digital. I dunno yet if they are smart meters. I now get billed monthly, and I've only been here long enough to have gotten one bill. I was asked to read the meter myself, I asked them "Are these smart meters? Don't you read them remotely? Do I really have to read the meter myself?" The answer was that they are smart meters, that will be read remotely, and I don't have to do anything other than pay the bill. The bill that turned up was "estimated". I'll actually read it myself next time, see what difference that makes. And I'll look up the brand and model number of the meter, see if they are supposed to be smart meters.
If in doubt, assume a trivial oversight in the code.
If I had to guess, I'd say that the meters are read automagically, but they don't tick the flags in the database that the call center's form does when they type in the reading you sent them.
When you think about the math of saving on rails: each person costs about $100 per hour (roughly twice a $50/hour salary, to cover office space, lights, computers and such). Let's say a lift only took about five minutes (even though it would probably be more like fifteen minutes to an hour).
That would be about $8 per person for the five-minute period. The weight of the servers would probably require four people, so *each* server lift would cost at least $32 worth of time. Do that once per server ($35 is roughly what I see new rails cost) and the rails would have paid for themselves.
So if I just do the math, I would be saving money by *having* rails, in manpower alone!
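The commenter's back-of-envelope numbers can be sketched as a quick calculation (all figures are the commenter's rough estimates, not measured costs):

```python
# Payback estimate for server rails: loaded staff cost vs the time
# burned on each four-person lift of an un-railed server.

RATE = 100.0       # $/hour, fully loaded cost per person
PEOPLE = 4         # people needed to lift a heavy server safely
MINUTES = 5        # per lift, optimistically
RAIL_COST = 35.0   # approximate price of a rail kit per server

lift_cost = RATE * PEOPLE * (MINUTES / 60)    # dollars per lift
lifts_to_break_even = RAIL_COST / lift_cost

print(f"${lift_cost:.2f} per lift; rails pay for themselves "
      f"after {lifts_to_break_even:.2f} lifts")
```

With these assumptions the rails break even after roughly one lift, which is the commenter's point: even a single racking operation covers the hardware.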
I went through this phase, pardon the purchase order. Boss 1: rails and cables included, without some ridiculous fitting margin and another fucking new screwdriver. Boss 2: Bad Tings TM will happen - perhaps exaggerated, with a concealed "it'll be me thrown over the side" entropy - unless political gain is to be had and it could be a team effort. #whatstheupside
N.B. I promise. One day. The mystery of the wrong coloured network cable. #truestory
A previous employer had a real problem with budgeting and planning, as well as some truly awful management types, the worst and most abrasive of whom happened to be the owner's daughter. There was a whole lot of emphasis on title and how many people you supervised.
When I got hired on, the IT department was about 10 people; over the course of a few years people left for better employment but never got replaced.
Eventually it was down to just me (network/systems), my PFY (helpdesk), the IT supervisor, and the IT director (who literally did nothing, ever). Budget cuts hit and the PFY got let go; a couple of months later the IT supervisor quit, so the PFY got rehired and the director absorbed the supervisor role. Then the director got fired and replaced by... the owner's daughter! Who had no knowledge or experience in IT but still micromanaged everything. So now we were down to two.
Now mind you, this was a nine-figure company, running everything on the backs of two people, refusing to hire more people, buy hardware, or even pay for overtime. So needless to say a lot of things didn't get done, and corners were cut.
During this time hardware was aging out. I kept things running as best I could, but there's only so much you can do with aging, over-allocated equipment. I built proposals and sent multiple warnings; in a multi-department meeting I even went so far as to announce that we were in a beyond-critically-dangerous place. All institutional knowledge was with one person (me), and if I got hit by a bus or moved on, they were screwed. Backups weren't functioning correctly, hardware was starting to fail, and all equipment was past EOL. They kept promising to hire someone, buy new hardware etc. etc., but never did.
So I started looking around, and found a new job a couple weeks later with better pay, better company, better conditions. So I turned in my two week notice and bounced before the ship went down.
They ended up hiring a consulting firm to take over for me, paying 5x+ what they paid me. A couple of months later the SAN failed - no proper backups, of course, and not monitored anymore, because consultants suck. They managed to recover most of the data and used that disaster to push new hardware purchases. Of course the new hardware was specced and bought through the consulting firm, so of course they marked everything up.
They ended up paying something like 5-6x over what I had specced out, for equipment that wasn't as good. All told, bad management and poor planning ended up costing the company an easy million.
At least the first details were exactly the same as a financial business where I worked, but end details differ.
One of many similar stories there. Got to share war stories from others in the industry - all similar...
The incestuous movement of "industry experienced managers" between financial companies is the reason I will no longer work in that sector. It self-selects for penny-pinching bastards, thus getting the low-end dregs of the already stinking pile of human refuse that is churned out of business management schooling.
The single large financial firm I saw that had their IT operation "ducks in a row" (very quality IT operation actually), had a corporate policy of cheating the customers at every turn. You have heard of them :(
* Also will not work in "government" sector
AC - but a pint for those still slogging away trying to keep those shit holes running
I worked for a small company back in the mid-1990s that developed CBTs.
We had been compiling a training program series and it was time to burn the master disc. The process took about 45 minutes to burn a disc, since this was a P-90 and the CD-R drive was only 2x speed. Rather than wait around for the disc to burn, we would head out for lunch at the local pizza joint across the parking lot and come back in time to check the disc. This time, however, the burn had failed about 90% of the way through due to a buffer underrun.
I checked through the system, the hard drive was fine, I defragged it again, and there was no file sharing enabled. As I was checking the available hard drive space, the lights dimmed, and the system locked up. The dimming lights coincided directly with the A/C kicking on.
I mentioned to the boss that I found the problem and it's related to the power in the office, and we needed a UPS at least for this system.
The boss said that a UPS is a perk and we could live without it. The problem was we couldn't live without the UPS because the main development machine was glitching and rebooting randomly every time the air/con went on in the office. He wouldn't budge...
The boss got antsy and got on my case again about the failing discs. At $15 a pop back then, he was shelling out about $900 a week in CD-Rs for nothing, because the failed discs were nothing but gold-colored coasters. I told him we needed a UPS, but he still wouldn't shell out $350 for one of those small APC units that weigh a ton and always have that flickering neon power switch.
After yet another failure, he decided I was doing something wrong and set out to burn a disc himself. He put his gold disc into the drive and started up the software. It was some special software we used then, and not anything like we have today. He hit the Burn button and he was so proud he could do it. :-)
Then it happened! At about 50% of the way through, the A/C kicked on and the lights dimmed. The disc continued to burn for a few seconds then a Buffer Under Run failure message showed up on the screen.
He didn't say anything, but the next day I had my "perk" to protect the system, and we never had a problem afterwards.
The company I work for wants you to reimburse them for training they send you on if you leave within a certain period after the training.
Which leads to refusals to do training - e.g. management decides training is needed in X, but many staff members know full well they will never actually use X in their work, or only to a limited extent; plus the course is expensive and has little open-market worth (compared to its cost) when changing job. So most staff refuse to go on the training due to the risk of a financial hit if they change job.
Picture the scene: a major £5M, 18-month-long building refurb, so not too shabby. The whole building needed to be powered down for a weekend, but before this happened a brief outage (20 mins TOPS) was required to give the switch gear a jolly good looking at. This involved removing a panel on the switch gear and looking to check what needed to be done.
The guy PM’ing the build paid us a visit in IT. “A bit of time has become available to do the power down we’d like to do it Friday 1700h”,
“ermmmmmm hold your horses there fella, best not do that kind of job on a Friday just in case it goes tits”,
“hmmmmmmm ok how about Monday”,
“Ok so we have generators available?”
“Nope its only going to be for 20mins TOPS, no work is being done its just a jolly good looking at the switch gear, and the generators were going to be a few grand so a bit expensive”
“Ermmmmmmm so no generators!? We really need the gens - they're for just in case it goes tits, not to run the DC. Our UPS will last about 50 mins, but kit will need to be powered down way before that, as the CRACs don't run off the UPS.”
“Sorry, too expensive, and it'll only be 20 mins TOPS.”
Monday comes, the users have all buggered off, it's 1700 and the power is still on. 1705, 1710, 1715, 1720... and OFF the power goes - so 20 minutes late already. Go into the DC and, hang on, the lights are still on, what's up? Well, they'd had a little bit of bother throwing the main breaker, so decided to pull a breaker for the rest of the building instead - the DC was still on supply. Okayyyyyy, so why was it late? Well, the bloke doing the work couldn't undo the screws holding the hatch on the switch gear he needed access to! FFS, the switch gear is mid-70s vintage and never touched, and the building is within spitting distance of the sea, so bugger me, it's corroded together - who would have thunk it! Maybe someone should have gone down and given them a turn beforehand, but never mind.
Right, well, as we're on supply we'll hang around until the rest of the building is back on supply "just in case"; I'll pop to the plant room for a chat and see how things are going. Off I pop to the plant room - everything is going fine. Speak to our sparky: yeah, everything is fine, had a spot of bother with the main breaker - pulled the air brake and it didn't trip, pushed the stop button and it still didn't go. Well, it is old electro\mechanical kit, probably seized up a bit - give it a squirt with some WD40!
Power comes back 1740 everything is fine so off we go home.
Next day 0800……………….
Just about to sit down at my desk, coat coming off, a user comes in: can't log in. Hmmmmmm, odd. Colleague and I go to the DC next door to the office. Open door. Quiet, pretty quiet - across the 20 racks, most of the kit was OFF!!!!!!!!!! WTF!!!!!!!!!! Well there's your problem right there! Frantically get things powered on, then find out WTF happened. Managed to get most stuff powered up over the course of the day; lost a few disks and a couple of old Linux boxes. The main hit was the Hyper-V cluster, whose storage was in a pickle - none of the VMs would come up! Managed to get them up, but that would come back to haunt us later.
Track down the sparky and the sheepish PM. Turns out the main air-break breaker they tried to trip at 1700, but which didn't trip, DID actually trip at about 1800!!!!! Being electro\mechanical, it was gunged up with old grease and the sear got stuck; after an hour gravity won and it tripped!! The sparky, who's not a switch gear expert, then couldn't get it re-engaged, and after frantic calls to a mate of his it was suggested he give it a squirt of WD40!!!!!! Which got it working - only trouble was, this took over 1.5 hours, way past the 50-minute runtime of the UPS! The result being that lots of kit didn't shut down properly. The Hyper-V cluster was one of them: although it should have shut down, it was held open by a buggy backup agent, so it didn't shut down when the UPS software told it to.
The knock-on lasted weeks. The Hyper-V storage got corrupted and that took ages to fix, and the backups, which went to a remote site, wanted to replicate 15TB of data across our 1Gbit line, so we landed up having to ship the replica back from the offsite location and replicate over our LAN. All for the sake of a generator hire for one day! Which was a few hundred £.
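For a sense of why re-seeding 15TB over the WAN was a non-starter, a rough transfer-time estimate (this idealised figure ignores protocol overhead and assumes the 1Gbit link could be fully saturated, so the real duration would be considerably worse):

```python
# Naive transfer-time estimate for re-replicating backup data
# across a WAN link, using decimal units (1 TB = 1e12 bytes).

def transfer_hours(terabytes, gigabits_per_sec):
    bits = terabytes * 1e12 * 8              # TB -> bits
    seconds = bits / (gigabits_per_sec * 1e9)
    return seconds / 3600

hours = transfer_hours(15, 1)
print(f"~{hours:.0f} hours at full line rate, "
      f"i.e. about {hours / 24:.1f} days flat out")
```

Even the best case is well over a day of saturating the production line, which is why shipping the replica and re-syncing over the LAN won.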
Anyway, after this total cluster feck, the main power outage a month later had four generators: one for the DC, one for the rest of the building, plus backups for both!
Biting the hand that feeds IT © 1998–2019