The NHS is a world leader in upgrading or removing EOL kit along with the rest of the government departments.
Financial firms have admitted they don't upgrade or remove end-of-life kit fast enough, can't identify all staff dealing with critical data, and don't maintain a comprehensive list of partners with system access. Financial institutions are under increasing pressure from the sector watchdog, MPs and the public to improve their …
As an ex-permie for an accountancy firm, ex-contractor for a bank, and now a permie in the automotive sector, I can say from experience that one of the UK's automotive firms is far, far worse. Mechanical and electrical engineers do not know enterprise IT and just create a mess. They are currently trying to work out how to push software updates over-the-air, and they refuse to see the similarity to doing the same for smartphones, so they are re-inventing the wheel.
Of all the employers I have worked for the only one that 'Does' IT properly was one of the Big 4.
Anon, natch...to protect the guilty!
Financial firms have admitted they don't upgrade or remove end-of-life kit fast enough
I remember helping out a friend who had just started his own small computer shop back in the early 2000s. A person from a local bank came asking us to try to fix a machine that wouldn't boot correctly; they said it was a critical system. I said no worries, and they presented me with an 8086.
I heard stories of mainframes from the 70s and 80s still running in banks until the 00s because they performed critical functions and no one would bite the bullet to replace them.
But really, I don't think any industry is very good at this. Surprisingly, you might find your local council is better than most. We were clobbered by GCHQ a few years ago for having EOL kit, and it was one of the few red lines we had to deal with. So I can safely say that my own local authority has no EOL kit at all (after shouting at the right people, who had hidden XP and NT kit under desks), and I am quietly confident that most others are in a similar situation.
Not so: mainframe hardware is under constant development, and newer, faster, shinier machines are released every three or four years.
Shiny enough that most banks replace their production kit every five years or so.
An obsessive commitment to backward compatibility means that, in most cases, it is just a matter of wheeling out the old machine and plugging in the new one.
Software is a different matter: the above-mentioned obsession with backward compatibility means you can happily run software written in the seventies.
The worst offenders in the obsolete-hardware stakes are Windows servers: systems stuck on obsolete operating systems and out-of-support databases because of the expense and risk involved in each upgrade.
My personal best, though: in 1999 I saw a PDP-11 running a production mail server. It was due to be replaced because no Y2K fix was available, so it went to a museum after 25 years of service.
Here we were making good strides in clearing EOL kit and getting everything to within a couple of point releases from current versions.
Then it was decided that it would be more efficient if data centres were Prod-only or Test-only, since in a DR scenario all the primary Prod systems were brought up on the test kit.
Except, that is, for the systems which ran in a clustered setup across the data centres, such as the mail system, plus a whole host of other small systems where the test setup was a fraction of the size of the Prod setup, since you only ever made changes to small elements at any one time. The solution for those systems was to move all the Prod into one data centre and lose the high-availability and DR capability, so when the Prod data centre goes down there is not enough hardware in Test for it to run.
A few years ago a Canadian company advertised a support contract to maintain a PDP-8 that was controlling a nuclear reactor.
Second reaction (the first is obvious): "Hmm, that is a machine that is comprehensible, well engineered, and designed to be maintainable. Perhaps it's best not to replace it."
I well remember a conversation with a financial sector IT worker. We were upgrading the code on our Nexus 5k and his comment was along the lines of "we haven't gotten through testing the previous version yet". I asked how long it took them to go through acceptance testing and his answer was "about 3 years". When I asked why so long the response was "we like low risk upgrades where all possible failure modes are known and corrective measures tested and documented".
"There is a significant risk that vulnerabilities of unsupported assets are not identified and fixed in a timely way,"
"They reported offering only ad hoc training, and had problems identifying and managing high-risk staff that dealt with critical and sensitive data – even when they did know who was high-risk, only 47 per cent provided extra training."
While it seems terrible, and it is, this problem existed before IT was a thing. Hmmm. And, no surprise, it still exists now. Woohoo!
IT's no help, though. There is never going to be much trust in IT, even with quantum computing, unless you limit who has what, and who can connect to which, where...