You missed one of the 'T's
Take the opportunity ;)
I became a Solaris system administrator in the 1990s: first proper job out of university. I read a lot about the Morris Worm – believed to be the first of its type, and of interest to me because the Sun-3 kit I looked after was vulnerable. Not long after, I was asked to take part in a radio interview about the "scary" new …
Remember the Y2K bug? Remember that it DID NOT bring down all the systems in the world and send us back to the Dark Ages?
Seriously - this is because a lot of people put a lot of effort into analysing and dealing with the risk, just as you say in this article.
I worked in the NHS just prior to Y2K, and we had a very diligent person in our Department who went around collecting information on every system, including the HPUX workstations and networking switches I was responsible for. That person collected manufacturer statements on Y2K compatibility for all these systems, and returned the information to the Trust. This was mirrored in all Departments as far as I saw.
The upshot - things were ready for Y2K and the world did not come down round our ears, starting in Sydney.
I moved to the Post-Production world in Soho about that time, and again we had someone who was employed to go round collecting the Y2K information and manufacturers' statements from all our kit.
We were asked to go in on 1 Jan, and had a happy day and a bit of overtime for doing nothing much - as we were prepared.
I guess a lot of shops used Y2K as an excuse to junk obsolescent equipment.
I guess also it's easy to say "Well, they should have set a hard end date on Windows XP kit" - I know it's not as easy as that in real life.
"Seriously - this is because a lot of people put a lot of effort into analysing and dealing with the risk, just as you say in this article."
Have to disagree there. It was mostly because way too many people seriously over-hyped the actual risks and made it look like the end of the world while those risks were in fact minimal if not non-existent.
And why? Simple: because there was a lot of good money to be made with Y2K.
"Have to disagree there. It was mostly because way too many people seriously over-hyped the actual risks and made it look like the end of the world while those risks were in fact minimal if not non-existent."
Actually, the reason those risks appeared minimal or non-existent is precisely BECAUSE a lot of people were paid a lot of money for a long time prior to the millennium in order to fix it. Huge chunks of the banking, education, healthcare and other sectors noticed big problems with the bug prior to the big event; my mother worked for a major educational establishment which literally re-wrote their entire student enrollment systems because of it, since they couldn't input their 4-year students starting in 1997.
Looking back, it's seen as a big fake scare in the popular imagination, but vast amounts of code were re-written in the decade leading up to it to prevent massive disruptions to very, very large areas of the world's major industrialized economies. Planes might not have fallen out of the sky, but it wouldn't have mattered since no-one would have been able to withdraw cash to pay for a flight.
"Actually, the reason those risks appeared minimal or non-existent is precisely BECAUSE a lot of people were paid a lot of money for a long time prior to the millennium in order to fix it."
I hope you do realize that a majority of software relied on the underlying OS for their date calculation(s) and thus the only fixing required was the OS itself and not so much the applications.
There has been a lot of effort put into this, not denying that, but there has also been a lot of overrated effort put in for the sole reason of making money.
Plenty of functions still worked, with the main issue being a wrong date showing. What most people are ignoring is that in most cases it wasn't the whole system that was affected. So "4 days from now" would still work even without a patch, because both the OS and underlying software would still recognize 1900 + 3 days as just that: a time difference.
That's not saying there wasn't an issue, but it was hardly as intrusive as people claimed.
I hope you do realize that a majority of software relied on the underlying OS for their date calculation(s) and thus the only fixing required was the OS itself and not so much the applications.
The problems were usually not around calculating the date, but in file formats or other structures in which the year portion of the date was represented as a two digit (or two character) field, perhaps to save on storage. Hence, in such systems the only years which could be represented were (19)00 to (19)99.
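The failure mode is easy to sketch. A hypothetical record format that stores the year as two digits (a common space-saving choice at the time) silently puts every post-1999 date a century in the past - no crash, no error, just wrong answers (the function and field layout here are illustrative, not from any real system):

```python
from datetime import date

def parse_record_date(field: str) -> date:
    """Parse a DDMMYY date field, assuming the century is always 19xx -
    the implicit assumption baked into many pre-Y2K record formats."""
    day, month, yy = int(field[0:2]), int(field[2:4]), int(field[4:6])
    return date(1900 + yy, month, day)

# A 1999 record round-trips fine...
assert parse_record_date("311299") == date(1999, 12, 31)

# ...but a record written on 1 Jan 2000 stores "00" and comes back
# as 1 January 1900 - a century off, with no error raised anywhere.
assert parse_record_date("010100") == date(1900, 1, 1)
```

The fix wasn't a clever patch; it was finding every format like this and widening the field, which is exactly why it took so much labour.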
Have to disagree there. It was mostly because way too many people seriously over-hyped the actual risks and made it look like the end of the world while those risks were in fact minimal if not non-existent.
Now you see, it's attitudes like that which are to blame for people today not upgrading from XP, and then getting hit by malware.
They hear the tech industry screaming at them to upgrade, but then they see that their systems are working fine, so they put their fingers in their ears and do their best to believe that it's all a lie -- that the industry is just trying to make more money by conning them into upgrading something that doesn't need it.
They're wrong of course, in the same way that you are wrong. Y2K was a major issue and it did take a lot of *real* effort to fix it. The fact that you're not seeing evidence of a disaster means that the actions taken were successful, not that they were unnecessary.
"Now you see, it's attitudes like that which are to blame for people today not upgrading from XP, and then getting hit by malware."
Or is it because tons of companies pay Microsoft to continue supporting their XP systems, and Microsoft complies (money talks, after all)? Which is what some people see in their day-to-day lives as well: they work with XP. So how bad can it be to keep that at home?
Money talks, as it always does.
Agreed. I was part of the team evaluating our organisation in 1999. We went round every single workstation, identified those that were compliant, those that needed BIOS updates, and those that needed scrapping. From memory at least 50% fell into one of those categories (and most of those were resolved by updating the BIOS). Yes, we did a lot of work, and during the first week of 2000 nobody's system went down due to the Y2K bug. Whether, if we hadn't done all that work, the same thing (i.e. nothing) would have happened - I guess no-one will ever know.
In my experience though, this is all part and parcel of the inherent distrust that any organisation has for its IT staff. If planes had started falling from the sky we would have got it in the neck for not anticipating and fixing things. But when nothing happened, we were not congratulated because we did anticipate and fix things. Instead we were accused of faking the whole thing and blowing it out of proportion (even though it was the Media of the time that actually did that).
Nothing changes.
"those risks were in fact minimal if not non-existent."
I've trotted this one out a few times but it looks as if it has to be repeated. I had a client for whom I'd got new live and backup servers ready, because the old ones (the old backup server, to be precise) wouldn't run the Y2K-ready version of their application. We were all tested and ready to cut over between Xmas & New Year. Their beancounters refused to let us go ahead because they didn't want to take the risk!!! of migrating before they'd gone through their year-end closedown of the books.
So for a fortnight we had the application vendor logging in on about a daily basis, maybe more, maybe less, to fix the data corruption we kept getting. It wasn't, therefore, an absolute disaster - a pity, as I'd have liked to take them back to the end of December and make them re-input several days' work - but I don't think you can count daily remote access to fix corrupt data as a long-term working solution.
Yes it was a real problem. Most people weren't that stupid so didn't get to see what could have happened.
And BTW, however much money was to be made out of Y2K, not much came my way - '99 was the slackest year I ever had.
"Have to disagree there. It was mostly because way too many people seriously over-hyped the actual risks and made it look like the end of the world while those risks were in fact minimal if not non-existent."
You clearly didn't see some of the crap that happened when we tested for Y2K issues.
With some systems it was just cosmetic, with others it was disastrous.
And then there were the problems unearthed by doing a proper systems audit, which were nothing to do with Y2K, but were problems just waiting to happen. We came across quite a lot of those.
I do indeed. In 1995 we set our new recruit the boring task of setting the clock calendar chip on our kit to just before midnight 1999. No problems. He then used his initiative and repeated the test for 2000, 2001, 2002, 2003 - at which point we were back in 1897! It was a bug in a calendar library. Updated the library, issue fixed. Many of these were bought for our persistent apprentice.
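The apprentice's approach generalises: don't just test the boundary everyone is talking about, step the clock across several successive year-ends and check the arithmetic survives each one. A rough sketch of that kind of test harness (the function name is illustrative, not from any real library):

```python
from datetime import datetime, timedelta

def rollover_survives(year: int) -> bool:
    """Set the clock just before midnight on 31 December and check that
    stepping over the boundary lands in the following year."""
    before = datetime(year, 12, 31, 23, 59, 59)
    after = before + timedelta(seconds=2)
    return after.year == year + 1

# Test a run of years, not just 1999 - the bug in the story above
# only showed itself a few rollovers past 2000.
assert all(rollover_survives(y) for y in range(1999, 2005))
```

Python's own datetime passes trivially, of course; the point is the method - run the same sweep against whatever calendar library or clock chip you actually depend on.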
I'd disagree at least on the choice of words. First, document the risk, then identify the likelihood and the impact, then identify the mitigation strategy, and then consider the cost benefit for those mitigation strategies (which is where you get to constraints).
I'd have to agree on the sentiment - risk management is not an IT responsibility although IT will often be blamed.
That thinking causes the biggest problems in clients that I work with. Documenting risks is seen as A Very Bad Thing. If you do not have a paper trail showing that a risk was known about in advance, there is no way to blame any individual or group if the risk actually becomes reality.
Often people are looking at the constraints and costs of mitigation first, decide they can't justify or afford it, but then have no way to safely document that decision - which as noted in the article is an 'Accept the risk' strategy. When the risk becomes reality and the impact actually occurs, the culture is to find someone to blame, especially by people outside the group responsible for risk management, rather than properly acknowledge, "OK, we all agreed this could happen, it was the right decision at the time, let's deal with the impact and move on."
Which is the point of this article - if you are responding to just the impact of WannaCrypt now because that was your properly planned risk mitigation strategy, then kudos. If not, deal with it but fix the risk management problem as well.
" First, document the risk, then identify the likelihood and the impact, then identify the mitigation strategy, and then consider the cost benefit for those mitigation strategies (which is where you get to constraints)."
And then email that information to literally everyone above you in the chain of command. And then take a copy of that email, and save it somewhere off-site where only you have access. Because when they ignore you completely and the shit hits the fan, you will need it.
Way back in, I think, 2008 the French gendarmerie did a risk and cost assessment and decided that MS and Windows was the big risk (their machines were using XP at the time). By the end of 2013 they had converted all their machines, with the exception of a few that ran specialist custom programs, to use Linux (a tailor-made version of Ubuntu if I remember correctly).
There is no reason, other than bribes from MS, that the NHS shouldn't do the same, again with the exception of the specialist custom programs for the expensive equipment which should be on its own network anyway.
"There is no reason, other than bribes from MS, that the NHS shouldn't do the same, again with the exception of the specialist custom programs for the expensive equipment which should be on its own network anyway."
I was just in an NHS hospital. The standard PC doesn't run XP. It's the non-standard ones linked to MRI machines and other networks that have XP. Anything that wants that MRI image needs to be on the same network, as you aren't easily transmitting 500GB of data a time through an air gap.
...if it's not connected to the internet.
Forget the cost of new hardware/anti-virus. What's the cost of employees wasting time?
I've got several consistently underperforming help desk staff on final written warnings for watching YouTube playing browser games for hours on end. Now they'd rather stare off into space than use remedial training materials to become competent.
2 of them have even gone to HR to complain about how unfair it is to expect them to do what they're paid for!
The cheapest solution is to run 2 networks and have most devices have no outside connection.
Unless many of the machines in question can't grok two networks and MUST be able to reach the Internet to reach other doctors, institutions, or whatever. And as noted by this attack, just ONE machine able to reach the Internet is enough to breach through into the LAN, and since the two networks usually MUST be connected at some point, they just pwn the bridge.
Unless many of the machines in question can't grok two networks and MUST be able to reach the Internet to reach other doctors, institutions, or whatever.
If you can't understand that it is possible to have READ ONLY access to certain network shares, and to have devices that act as a bridge between certain networks, then perhaps you might want to look for your reading material elsewhere? If a network admin can't do this, perhaps they need to relocate their office... like to somewhere not in IT. Even I know of a few ways this can be done!
That's not really an option for a lot of jobs now. You try and diagnose an Exchange server fault without being able to check the error code from Microsoft's KB. Or get email via Office 365 without a web connection. Or get the latest set of medical papers to read through without access to JSTOR. Or any one of a hundred thousand jobs that aren't minimum-wage helpdesk crap.
Sure, low-end ITIL helpdesk staff who are mostly call centre flunkies given permission to reset passwords can be kept off the 'net, if you're attempting to drive them to suicide through frustration and boredom; you're already discovering that the side-effect of that is that their morale drops through the floor. But most people have jobs where productivity is increased by web access, and restricting it too far will damage both your ability to recruit and keep the best people, and will harm the company's productivity and competitiveness over the longer term while your staff struggle to answer questions that could be solved with 5 minutes on Google.
Any decent web filter or proxy can weed out serious productivity killers, like Youtube or flash games (though the best way to do so is actually motivating your staff, rather than treating them like convicts). Cutting the cord to the 'net altogether is not a sensible alternative for most users.
"...if it's not connected to the internet."
That's the exactly wrong approach for a lot of business needs nowadays. People need to be able to access services over the internet. How else is your GP going to google your symptoms (ha ha only serious)? What they don't need to do is download and run arbitrary software on their workstation. What they should have is a network-oriented thin client where everything runs over the network and nothing executes locally, except the built-in browser, Citrix client or whatever.
"...watching YouTube playing browser games..."
I knew Google was putting a lot of time and money into YT, but I didn't realise they'd brought it that far!
Good luck with the staff issues. A bit of light reading at El BOFH could give you some clues ;)
You have to turn off ancient cruft that is only there because it was in Windows 3.1 and has been updated since then about as much as the Add Font dialog. In this case, SMBv1.
And anything that does need SMBv1 should probably be on an "old ancient icky thing" LAN separated from the rest with no Internet access.
Raise awareness with your users. Take your time to inform the people who deal with incoming e-mail about the risks, explain (in terms they can understand, not everyone is a geek) what the danger is and why they should never "just" open any attachment and most of all: be there when it counts.
In other words: take them seriously. And take initiative. Have you been to certain points of risk this week to tell 'm about the possible hazards?
Most of the time administration opens e-mails (and attachments) because they usually don't really care. Why should they? When they call for help, IT staff are usually acting like jerks towards them (in their perception anyway), so duh: their fault for not keeping the stuff secure.
Usually the best solution is also the simplest, and therefore also the one usually ignored.
The problem is, nothing turns people's brains off quicker than a lecture from your average IT security consultant. I've had some success by introducing humour, which helps keep them engaged, and by emphasizing that they're also at risk outside work.
But in general I agree. The long tendency of Infosec to just say 'Thou Shalt Not' rather than explaining 'you can't go there because it will literally destroy the company and your job with it' has led to most people perceiving them as a nuisance and a cost centre to be worked around, rather than expert professionals doing a difficult job to keep them safe. Pointing out that they access their bank details/personal email/whatever on their work PC and so need to keep it safe helps a little, but you really do need to sit down and explain what the threats are and how they work.
"What do you do when the one who doesn't get it is on the board?"
What you need is something that starts up when clicked and displays an animation, flashing text whatever saying something like "Deleting all the files on your Network", "Kiss your business goodbye" and the like for a few minutes. And then ends up with a message "Don't panic, that was just a warning. Go and offer your IT whatever they need to secure your system."
Then get someone to email it to them from outside the business.
"... it has led to most people perceiving them as a nuisance and a cost centre to be worked around, rather than expert professionals doing a difficult job to keep them safe."
That's because if infosec got their own way, the end result would be that IT was so locked down that employees couldn't do their jobs properly.
Oh, and that many of the restrictions are just security theatre that don't actually achieve anything, apart from antagonising the very people infosec should be trying to win over.
It's not about risk anymore, it's that security trumps everything. So much so that even the politicians have jumped on the bandwagon.
Raise awareness with your users. Take your time to inform the people who deal with incoming e-mail about the risks, explain (in terms they can understand, not everyone is a geek) what the danger is and why they should never "just" open any attachment and most of all: be there when it counts.
I think defence in depth is best. Yes, make users as aware as you reasonably can of these issues, but don't rely entirely on their diligence and understanding. Do all the other things you can too.
After all, I don't suppose there is anyone out there who doesn't bother with user accounts and just gives all their users the root password and the injunction to be very careful. You could in principle, but I guess the results wouldn't be very pretty. So there are other mechanisms in place, such as access and privilege controls related to job function, plus backups, recovery strategies etc.
You are way too optimistic about users actually paying attention, much less heeding you.
Several years ago I had the (dis)pleasure of fixing a machine where the user admitted he deliberately opened an emailed virus because he was curious to see what it would do and didn't want to mess up his personal PC to find out.
"Most of the time administration opens e-mails (and attachments) because they usually don't really care. Why should they?"
Agreed. Even the best-trained, highly motivated staff can make mistakes, or lose attention. Industry organizations should cooperate to expose current staff members to POISONED-INDUSTRY-BAITS. Independent organizations could then alert the sys-admin that certain staff members are inattentive, irresponsible, etc ... when there are consistent patterns of non-accidental breaches.
and we have quite a number of older OS's, and it's not simply a matter of software compatibility.
An all too common scenario has been:
£450,000 budgeted spend on a microscope or other piece of equipment.
£320,000 of that is the actual hardware of the microscope, lenses, lasers, cameras, power supplies, cooling systems, heating systems, incubator enclosures, motorised stages etc.
£125,000 of that is a service contract for the next 10 years or so.
£5,000 is a PC (or two) to control it all, gather data etc, with the usual OEM markup. It's a Windows XP machine, custom built, no antivirus because that cocks up the timing and eats up clock cycles and you've bought it from the manufacturer and they've done everything in their power to find an anti-virus that works without stuttering when you're trying to count individual photons on some Intel Core CPU that was state-of-the-art at the time.
5 years later on, and XP is unsupported. The PC is showing signs of capacitor rot, and the storage is getting all filled up in a single experiment. Not only that, but the ISA slot that the custom built capture card fits into is getting as rare as unicorn shit. Can we get an updated PC please, Mr Microscopemaker? Yes, of course, comes the reply. If you get a new microscope at the same time. Because we do a Windows 7 PC, but we can't get hardware with ISA anymore, it's all PCI now, and the PCI version of the card has a different camera. And the camera has a different whatever which means you'll have to change the 'scope's nosepiece, which means a whole new incubator box, which means... etc etc
And if you CAN find an older PC with the right interface, it's got some form of incompatibility with a newer OS probably, or it's too slow for the bloatware OS.
Oh, what a joy it was when the cameras fitted to microscopes started using IEEE 1394 cables, and were all industry standard! You could fit a decent Firewire card, fire up some generic video program and see the camera output without having to start Zeisslympuskon Control V11. The control interface for moving the stage and switching objectives was still PCI or serial port or parallel port, but now you get PCI-e and PCI-x and PCI-whatevertimeswhatever... and if you CAN find a motherboard with a plain old PCI slot on it, it's usually just the one, and bridged with a chip that introduces a few cycles' delay, or hasn't got the full range of interrupts available. Or if it was RS232 or parallel, try finding one of THOSE on a modern PC without having to compromise on some other part of it.
So then they started using USB for controller interfaces.
And then IEEE 1394 started to mutate.
And USB started to mutate.
And suddenly the lifecycle of a usable piece of equipment starts to shorten...
So now what we tend to do is to buy several PCs, put one into storage as a spare, put another outside the room on a dedicated link to the first and have that one sitting on two networks, and then push data from capture PC to process PC, run antivirus on the processing PC along with the manufacturers analysis software which ALSO runs on the capture PC, push from process PC onto a network share. It means TWO copies of the expensive software, and extra PCs at the time of initial purchase, but it's the only way I can see to actually being able to keep these rigs going for anything exceeding or even approaching 10 years.
"[...] put one into storage [...]"
Unfortunately electrolytic capacitors degrade even faster if they are not powered up. I saved some scavenged AT PSUs as spares. Come the day I needed one - they were all dead.
Even an archived hard disk proved to have died after a few years stored in a cupboard. Luckily it was only being powered up for an end-of-life wipe.
The spare comes out every now and again and gets powered up when the engineers come to do a software upgrade of their application.
But I've not heard of electrolytics having a shorter shelf-life if left unpowered. Certainly not in the 5-10 year range. Is this a real thing?
"But I've not heard of electrolytics having a shorter shelf-life if left unpowered. "
It was well known many years ago that a piece of stored equipment may have died due to its electrolytic capacitors failing. IIRC one technique was to feed them a slowly increasing voltage so that they would be restored rather than suffer a catastrophic failure.
Here is an indicative comment elsewhere.
Some PSU manufacturer web sites give advice on sizing the capacity for a PC. IIRC they say to over-size in order to allow for a reduction in capacity over about three years as the capacitors age.
" IIRC they say to over-size in order to allow for a reduction in capacity over about three years as the capacitors age."
Delayed edit!
IIRC they say to over-size in order to allow for a reduction in capacity over about three years as the capacitors age due to use - viz temperature and ripple current.
We have spent the week fundamentally changing the way we manage our office networks, in order that we have some protection against Cryptolocker, WannaCry and other ransomware attacks.
Should we have done this before? YES
Could we have done this before? NO.
It's only due to the widespread publicity garnered by the WannaCry attack that our Directors and PHBs have been stung into releasing the necessary funding to allow us to do it.
Luckily, we've long had a plan ready to implement.
So our backups are now on a separate LAN, with no direct routing, and no SMB connectivity.
We've also restricted SMB between individual hosts on the LAN, and moved all non-essential hosts (directors' phones, laptops, tablets etc), to a separate WiFi network, with no access to the corporate LAN.
It makes life harder to do certain things, but it does mean that even if the boss's secretary clicks on an attachment, or a link in an email, we are probably going to survive it.
I'm feeling a lot more comfortable at the end of this week, than I was at the start of it.
I confess I have one PC running XP because it's all it needs for the single purpose the PC is used for.
It is however on its own subnet, which is not allowed to connect to the internet - which is what anybody running a redundant OS should do.
Run redundant OS's if you must but only an idiot lets them talk to the outside world.
"Run redundant OS's if you must but only an idiot lets them talk to the outside world."
Nothing wrong with them talking to the outside world if that is an essential part of the function - but allow no internal connectivity. Not forgetting a back up of the system to reprime them if the worst happens.
So what if its essential function is as a bridge, or some other function that requires BOTH internal AND external connectivity? And because of the custom software, it has trouble with proxies?
So what if you try a spell in the real world for a change? Instead of these rather disingenuous arguments you keep coming up with?
Not forgetting a back up of the system to reprime them if the worst happens.
It's the backups wot's important. Ransomware or head crash, your data is just as trashed. Many HDDs have a BIOS chip that (IMU) encodes the data on the disk, and if you replace the circuit board on the HDD the BIOS chip has to follow the data for the data to be usable. Lose that, and the data goes with it.
But if you have a backup, your disk can be trashed and you are back up in the time it takes to restore the data.
When I was working full time and responsible for our data and general IT support alongside my real job, I'd assumed that things would go wrong somehow, so I made lots of backups, pretty much ad hoc, that were not all stored on the network. But when I tried to get the fully professional IT dept to organise off-site storage for properly organised backups, it just wasn't seen as a priority. Until the volume of data made it impossible to collect it all, and data protection made taking data discs home a no-no. Suddenly they started to take it seriously. Didn't come up with anything proper for a long time, but at least they swapped a series of external hard drives round each week and kept the recent one in a safe.
But somehow, if updating OSs is a poor relation, then back-up systems are the dodgy uncle that nobody ever mentions.
I absolutely agree... threats to file storage should be mitigated at the storage level of an enterprise architecture rather than relying solely on the A/V and O/S level to defend against them.
Suitable file systems, such as ZFS, can provide a defence through snapshots, but even just a regular frequent rsync to an administrator-only share would provide some defence.
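For the rsync-style fallback, nothing exotic is needed. As an illustration of the core idea - copy files that are new or changed into a mirror ordinary users can't write to - here's a hypothetical sketch in Python rather than rsync itself (all names invented):

```python
import shutil
from pathlib import Path

def backup_changed(src: Path, dest_root: Path) -> list[str]:
    """Copy files from src that are missing from, or newer than, the
    mirror under dest_root/latest - a crude rsync-like incremental sync.
    In a real deployment dest_root would be an administrator-only share."""
    mirror = dest_root / "latest"
    copied = []
    for f in sorted(src.rglob("*")):
        if not f.is_file():
            continue
        target = mirror / f.relative_to(src)
        if not target.exists() or f.stat().st_mtime > target.stat().st_mtime:
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(f, target)  # copy2 preserves the mtime
            copied.append(str(f.relative_to(src)))
    return copied
```

Snapshots (ZFS or otherwise) add the crucial extra property that the mirror's history can't be encrypted in place by the same ransomware that hit the source.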
Part of it is that no one wants to spend on backup hardware, the media that said hardware uses, the software that the backups require, and the administrative overhead that managing backups entails.
A number of years ago, I was doing some work for a company and they balked at paying ~$700 USD for a re-built DLT drive to replace the one that had packed it in. It was on their only server, and it was their only backup method. I asked them point blank, how much their business was worth, because if they didn't have good backups, they were a server failure away from losing it all.
They paid for the tape drive.
Part of it is that no one wants to spend on backup hardware, the media that said hardware uses, the software that the backups require, and the administrative overhead that managing backups entails.
Always a shame, that. There is plenty of adequate software for most home/SMB level users that will do a good enough job for free (even Windows had a decent incremental backup tool at one time, at least in the XP days), and the time taken to administer is just setting up the backup system (schedule etc) plus the cost of media. With USB drives you can back up a few machines easily enough, especially if you only back up what you need. Once done, you only need someone to remove the media at the end of the backup and swap in the next lot in time for the next one. I know even $100 for an external HDD can be a lot for some businesses, but pick it up when you have the spare cash, and save for it. It can save your business if something nasty happens.
And if you know someone trustworthy who has a bit of spare disk, and you have an appropriate level of data connection, you can look at syncing stuff via online methods easily enough.
It's odd that so few people do backups when they're relatively easy to do and don't require a lot of downtime. $100 + 2 minutes a week per machine covers you pretty well for most things. Even ransomware (so long as your backup disk isn't connected when you're hit, or you have a couple of sets of backups, at least one of which cannot be altered at the time).
And restores? Who ever bothers doing a trial restore occasionally these days?
I visited an old customer yesterday who wanted a quick showing of how to use backup software. I'm hoping the Win7 built-in backup is at least as decent as the one in XP - I did some trial restores with that back then, because I was being paid by the hour, and a few hours sitting in the factory office with their nice fat net pipes, browsing whatever and being paid while I waited for the blisteringly fast HDD speeds to do a restore, suited me fine. (The primary backup was to an internal HDD, which was then mirrored offsite over the rest of Friday night using other software to sync the files at the remote end.) One piece of MS software that worked fine. As he'd never had a backup done, and I teach "use an external disk, ONLY plug it in when you're doing backups, and preferably store it well away from the computer", we were using a USB drive. Took the system 3 or 4 hours (after I left) to complete the backup. I tell them to get a second disk when they can as well, in case the first fails.
To do a trial restore requires that much time as well, and what if the restore breaks the machine? (If I had the spare drives available I'd replicate an HDD failure/replacement for that, but for now I don't have a spare 2TB HDD to use in cases such as his.)
Similar to fire drills, rehearsing backup-restore should also be done. This allows an estimate of the data and time lost, and gives a better estimate of times and costs, factored into peak-usage times.
Similarly, replacement times for hardware should be known. Costs and delays of off-site and cloud backups are constantly changing, so these need yearly or half-yearly estimates as well.
Still waiting for Facebuck / Googhoul / M$ / LinkedIn / Uber etc to get hacked and have all their juicy slurp ransomed off. No such luck yet! But it feels now like that's the level of seriousness before tech complacency wakes up.
Meantime let our dystopian 'DeepMind' future roll on... The march to the end of all privacy is like celebrating 1984: 'Thoughtcrime does not entail death, thoughtcrime IS death ...... We shall abolish the orgasm'.
Sadly the gods above us have chosen path 3... Once I'd stopped crying into a bucket, I ignored and tested everything in between and applied. I'm paid to mitigate risk to my direct company; if my parent corp choose a different path, that's their call, but I'm alright, Jack ;o)
Even now there is still refusal by certain sectors... it beggars belief. I am not one of those sites, thank ****
...."you just can't protect yourself against your own users".....
* Plus being a patch monkey only gets you so far, as you incubate each patch and wait for approval from execs. In short it's not just about cost, it's about loss of revenue, or the perceived loss from downtime due to bad patches. This also hampers corporate response time.
* If installing the latest 'Creators turd' will 'bork your quarterlies', that's going to dampen executive spirit. Another key aspect is demoralization, or the loss of respect and the total annihilation of IT pay, aka 'no reward for doing the right thing'. Such is the sad state of IT... Who wants to stick around for 'all the pain without the pleasure'?
=====================
https://forums.theregister.co.uk/forum/containing/3166772
https://forums.theregister.co.uk/forum/containing/3126671
Windows is not 100% backwards compatible -- upgrading the OS to 'the latest' not only incurs costs but also runs the risk of borking key applications. There are also plenty of systems where you just can't upgrade, such as machine control, where you can't update the system without significant risk -- the system just won't work, or you run into safety and certification issues.
Windows fails because it's not modular -- it's an 'all or nothing' system where you can't remove the components you don't need. It should never have been used for SCADA and machine control applications, but we're stuck with it so we have to make the best of a bad job.
Yet another reason why SCADA systems should be isolated as much as possible from other networks.
We (myself and the network engineer, along with our boss) have at least managed to convince the business owners of our SCADA systems that:
a) Running the server on what amounts to a slightly beefed up desktop will no longer cut it
b) To actually *involve* the IT infrastructure people when upgrading/replacing it, so it gets done in such a way that it's not a radioactive cesspool of security problems.
Anon to protect that thing called a paycheck.
" It should never have been used for SCADA and machine control applications but we're stuck with it so we have to make the best of a bad job."
And what did the machine control, SCADA, etc. jobs before Windows came along?
Was it written in stone that "we're stuck with it, forever", or did something different and (at least on paper) cheaper come along?
Now that the real costs of buying cheap (but inappropriate) are a bit more visible, now that the costs of cleanup and risk are a bit more visible, perhaps yesterday's solution wasn't as expensive as it supposedly looked - **assuming** that the cost of unnecessary chaos comes out of the same empire's budget as decided Windows was a better fit because it was "cheaper".
Or perhaps tomorrow will bring along a tried, tested and proven new solution which can be trusted to be cost effective over a timescale of a few years rather than the usual IT vendor lifecycle of a few months.
Such things have been around since the 1990s (if not longer) but they're not shiny and new every few months. Which is exactly what's needed in *some* cases.
YMMV.
NB I thought this week's analysis had concluded that out of date and unpatched OSes were a tiny part of the WannaCry outbreak. Or don't facts matter, when MS and friends are telling a different story?
Exactly, and you are guilty of the same thing. Why on earth do we accept Microsoft (and every other vendor) declaring that their OS/software is not going to receive critical security patches within a reasonable lifespan of the hardware it supports, and in the absence of 100% backwards compatibility?
Microsoft knew about the problem, knew that many many thousands of users worldwide have little choice but to run XP, and *chose* not to release a patch until it was too late. There's the direction the finger should be pointing in. Not that they are the only guilty party... here am I running XP in a VM because I refuse to throw away a perfectly good scanner just because Canon don't want to release Win 8.1 drivers in the hopes that I'll throw it away and buy a new one.
"here am I running XP in a VM because I refuse to throw away a perfectly good scanner just because Canon don't want to release Win 8.1 drivers in the hopes that I'll throw it away and buy a new one."
Yes it's a familiar refrain but do take a look at Linux or BSD. They may well have a driver for it and you'll be able to run updates on the OS and the scanner will still work.
Yup. Very good chance it'll run in Linux, and from the moment you plug it in.
Also.. If it's not something special, there's probably an all-in-one that's cheaper than a replacement ink cartridge that does at least as good a job on a more modern OS, that would pay for itself in not having to fire up the VM every now and then (of course, if it's a "proper scanner" then keep it going - those cheap AIO jobbies only last a couple of years!)
"Yes it's a familiar refrain but do take a look at Linux or BSD."
Linux requires less powerful hardware than Windows. True - some hardware runs better in Linux, especially older, popular hardware. Windows software can also run, via WINE.
Like modems? Wi-Fi adapters? The list of incompatible devices for both classes is long and notorious (mostly because a lot of built-in devices are included; don't count on anything from Broadcom to work natively).
And as for WINE, it can be hit or miss, especially for high-performance stuff like games (which are also less than ideal for virtualization, since 3D is one of the weaker things to virtualize).
WTF - "Why on earth do we accept Microsoft (and every other vendor) declaring that their OS/software is not going to receive critical security patches within a reasonable lifespan of the hardware it supports and in the absence of 100% backwards compatibility"
A - personally, I class 17 years as well outside most reasonable expectation of hardware support (most enterprises work on a 5 year hardware refresh cycle in my experience)
B - for YEARS everyone has been complaining that MS have NOT removed all backward compatibility - now you are moaning that they are not keeping ENOUGH backward compatibility.
C - most of the security issues faced by modern OSes are because of backward compatibility - the WannaCry worm exploited a hole in SMB v1, FFS!
Unless the system you're trying to virtualize has custom hardware. A virtual machine cannot virtualize what it doesn't know, and a black-boxed custom ISA interface card is about as non-upgradeable and non-virtualizable as you can get. And if the manufacturer refuses to replace the computer without replacing the entire works (at a six-to-seven-figure cost), what are your options?
Google is your friend, my friend. It's not all doom and gloom for those with ISA cards, though some stuff (MRI etc.) may have a harder time. Shame I don't have a couple of hundred dollars to spare, or I'd grab a second-hand reasonably modern mobo with an ISA bus and have a play with some old ISA cards, to see if I could plug them into a VM. Don't suppose someone in NZ wants to part with such a board?
And the manufacturer is NOT your friend, since you can't replace the machine: it isn't yours to mess with. Remember that infamous boilerplate: "Breaking this seal voids all warranties and service agreements." It's basically an untouchable machine that's an integral (and to the manufacturer, inseparable) part of the six-to-seven-figure whole. And no, airgapping won't be an option since it has to be able to transfer the fruits of its labor, and a USB drive can pwn a machine just as easily as a network connection.
It's basically an untouchable machine
You know that the manufacturers don't expect the machines to be RTB for repair, right? I guess not. You know that we're talking machines that are a few years (maybe even a decade) past their warranty end period, right? If they were still under warranty this wouldn't be an issue as the manufacturers would provide a fix.
BTW, I've spent a number of years in computer and electronics repair. There is no "Breaking this seal voids all warranties and service agreements"... ever. No such kit could ever be serviced, because servicing (aside from software updates) does exactly that. Opening by a non-approved person may void a warranty, but many places are fine with it. I've had HP send out a rep to do a job under warranty where I had already opened the machine as a non-HP tech, but they still honoured the warranty as they saw our handling practices were safe. (A spill would've voided the warranty; also, in the case of a spill, time can be essential to saving the machine, and minutes can make a huge difference!)
No, you can't argue that the manufacturer is closed down as you were just talking about the machines still being under warranty.
Jake gave you a suggestion recently. I do strongly suggest you take it. I've seen some great stuff from you and do hope to see more, but this "being disingenuous for the sake of argument" really does get annoying at times.
"Jake gave you a suggestion recently. I do strongly suggest you take it. I've seen some great stuff from you and do hope to see more, but this "being disingenuous for the sake of argument" really does get annoying at times."
I REFUSE. It's called Playing the Devil's Advocate. Besides, (1) edge cases don't stay edge cases for long, and (2) I've seen enough edge cases firsthand to build a tesseract.
"you just can't protect yourself against your own users"
There is no silver bullet for security, and multiple layers of defence should be used. Of course, anti-virus can only protect against known issues. As for your own users, using a solution that enables whitelisting of running processes is going to help.
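The whitelisting idea above boils down to a simple set difference: anything running that isn't on the approved list gets flagged. Real enforcement lives in OS-level policy tools, but the core check can be sketched like this (a toy illustration; the process and allowlist names are made up):

```python
def flag_unexpected(running_procs, allowlist):
    """Return, sorted, the executable names that are running but
    absent from the allowlist. Matching is case-insensitive, since
    Windows filenames are. This only *detects* -- enforcement has to
    happen in OS policy, not in a script like this."""
    allowed = {p.lower() for p in allowlist}
    return sorted({p.lower() for p in running_procs} - allowed)
```

For example, `flag_unexpected(["Explorer.EXE", "cryptolocker.exe"], ["explorer.exe", "svchost.exe"])` returns `["cryptolocker.exe"]`. The weakness, of course, is that detection after launch is already too late for ransomware; that's why the whitelist has to be enforced at execution time by the OS rather than audited afterwards.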
For almost ALL encryption malware, the Microsoft Management Console can cut off the main vector of damage from these types of attacks. Will the worm and/or virus part of the attack do damage? Perhaps - but no backups will be compromised, and by isolating the machine and scanning after zero day, the malware can be removed. I've done this sort of thing for a long while, but I'm not the author of the snap-ins and MMC concoctions; you'll have to configure them yourself, or buy in some coders who can make the changes for you with configuration tools. It probably takes about 100 actions to properly configure the MMC to block these attacks. If you combine that with a good enterprise anti-malware solution, you put yourself well above the low-hanging fruit for sure!
None of my clients tested their backups before disaster - so I had to do them anyway, and I certainly found what works. With those kind of backup applications you never have to worry about a recovery, as long as the drive you are imaging has the same geometry as the original. I always scan the backup drive outside the operating system, even if it was isolated during the attack - invariably I find the original attack package sleeping in the files. By removing it, I have always had a successful restoration.
With those kind of backup applications you never have to worry about a recovery, as long as the drive you are imaging has the same geometry as the original.
I've used a number of tools, mostly dd and Ghost. dd may cause issues if you image back to a smaller drive (the partition isn't "closed off" properly, and if data was stored at the end of the partition it simply won't be there in the copy) but happily copes with larger drives (though your partitions will run out before the end of the drive, meaning you have some "unallocated space" to use as you wish). Ghost will give you a lot of options, and at least for NTFS and FAT it'll let you resize the partitions within the limit of used data (e.g. if you have a 1TB partition with 1GB used, you can shrink it down to 1GB). Ghost does have issues with ext partitions when creating an image, but I'm sure I've cloned ext drives with it (ICBW).
I've used other tools which work much like Ghost, where going to a smaller drive (larger than the data size) is OK, and going to a larger drive is not even close to being a problem. That said, Windows up to 7 could sometimes have issues with something; IIRC a fixmbr and fixboot (or whatever the 7 equivalent is) would take care of that in a few seconds. But then 7 threw a fit and became unbootable if you committed the heinous crime of changing the SATA port the drive was connected to! (Good test for the skill level of someone fixing a machine: change the SATA port and see how long it takes them to fix it.)
I've never had issues with larger geometry drives, and few (quickly fixable) with smaller drives, especially with tools other than DD. In fact my most common use of such tools was moving a machine to a larger drive, second most being recovery from a dying disk.
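The dd pitfall described above (silently truncating the tail of a partition when the target is smaller) is easy to guard against by comparing sizes before copying. A minimal sketch, using plain files as stand-ins for disks since `os.path.getsize` reports 0 for real block devices:

```python
import os

def safe_raw_copy(src_path, dst_path, block_size=4 * 1024 * 1024):
    """dd-style raw block copy that refuses to image onto a smaller,
    pre-existing target -- the classic way to lose whatever sat at the
    end of the last partition. Works on ordinary files; real block
    devices would need an ioctl or seek-to-end to learn their size."""
    src_size = os.path.getsize(src_path)
    if os.path.exists(dst_path):
        dst_size = os.path.getsize(dst_path)
        if dst_size and dst_size < src_size:
            raise ValueError("target smaller than source: data at the "
                             "end of the image would be truncated")
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        while True:
            chunk = src.read(block_size)  # copy in large blocks, like dd bs=4M
            if not chunk:
                break
            dst.write(chunk)
    return src_size
```

Going the other way (larger target) needs no guard, matching the experience above: the copy just leaves unallocated space after the last partition.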
I think the issue with ext drives is that ext is designed to spread allocations across the whole disk to reduce rust seek times in a multiuser environment. So given any particular drive's size, odds are there will be stuff towards the end you have to preserve. So IOW if you're cloning an ext drive, you can almost never shrink it, only match or grow it.
If you buy equipment that is supposed to last a few decades (medical equipment, production line systems), then you might get support for a few years, BUT you only get support with the operating system you bought your machines with.
Also, there are quite a few hurdles, even in the cases when you can upgrade. Many such machines are calibrated and certified. Touch any part of the software, and you are in red-tape hell, as well as a prolonged downtime until your equipment is re-calibrated and re-certified (which could take *weeks* to *months*).
Yet we have modern equipment being provided with XP as the hidden underlying OS. Kit produced AFTER XP went EOL.
And the hypothetical medical imaging kit does exist. I use it professionally daily.
Might help if you understand the workflow and comms protocols used to commit images from a medical imaging device to storage - it uses DICOM (also look up IHE Connectathon), and an FTP/SMB/other push to storage would break the whole lot. Lots of bridges between the VLANs. Lots of fixed IPs and point-to-point configs - getting better with newer generations of kit. At least we have SANs rather than tape!
There are many bits of healthcare software that have barely migrated from Win3.1 and DOSbox, running in a JVM with questionable GUIs.
We are hardwired into obsolete IE versions, with no standards compliance for e.g. Chrome. Interdependencies are a bitch; one system that won't work except with version xx locks the rest into the old stuff.
The suppliers of the BIG kit (MR, CT, screening rooms) have new kit and versions annually. The stuff from 5 years plus doesn't get the new shiny, and there is no return on investment to update unless cost+++
And there is no managed equipment replacement program anywhere in the NHS.
Wait until it dies as that is when we can justify the kit the hospital can't work without....
plus ça change
Working in hospital IT myself, it's nice to see someone with the right mindset for once, but our risk department don't think there is any IT risk. Everything on the register is clinical or maybe financial, but nothing from the technology side.
I have been banging my head against a brick wall, and managed to get the two words "cyber security" onto the register, but it still isn't red yet!
The board still don't understand how important the information systems are to the job in hand. If they insist on running the patient administration system on a single standalone server, with no support, that takes days to recover from a lost 15 minutes, one day it will go TU and it will take days to get it back online!
Beer for someone recognising the problem...
Being in charge of risk for a huge, quasi-governmental organization, I can tell you that very often the alarm sounded for specific risks (such as running XP) is ignored. Naturally, once the barn is on fire (thanks, WannaCry) the IT side of the house desperately wants to lock the door.
The solution is to put the risk squarely on the group demanding the risk be taken. If I go to the "x-ray' department & say "XP is putting the entire place at risk here are options for things to do to reduce that risk" and their response is "It is cheaper to keep our old X-ray machines" Then THEY have to sign off that THEY accept the risk, not us. If we did not warn them or gave them poor information that would be our fault.
My guess is the risk folks at the NHS's arms are tired from waving like madmen trying to call attention to the situation, and the same old 'legacy apps' shit was all they were given.
But that doesn't work in that kind of bureaucracy. The departments in question are still part of the hierarchy, and they don't give or receive funds directly. They STILL have to come from the accounting arm which covers the whole works. Even if "X-ray" sign off on the risks, if something DOES happen and the X-ray machine goes down, how does the hospital get its X-rays, then?
Most people (even in IT) don't realise that there are actually Embedded versions of Windows. Many use the same old screensavers, desktop look-and-feel and logon screens as XP, but are based on later versions of the OS (with the lifecycles of those products). Many medical imaging devices, ATMs, screens at the train station etc. will be running these rather than the "full" desktop.
From https://support.microsoft.com/en-us/help/18581/lifecycle-faq-windows-products
How does the end of support for Windows XP impact Windows Embedded products?
Windows Embedded products have their own distinct lifecycles, based on when the product was released and made generally available. It is important for businesses to understand the support implications for these products in order to ensure that systems remain up-to-date and secure. The following Windows Embedded products are based on Windows XP:
Windows XP Professional for Embedded Systems. This product is identical to Windows XP, and Extended Support will end on April 8, 2014.
Windows XP Embedded Service Pack 3 (SP3). This is the original toolkit and componentized version of Windows XP. It was originally released in 2002, and Extended Support will end on Jan. 12, 2016.
Windows Embedded for Point of Service SP3. This product is for use in point of sale devices. It’s built from Windows XP Embedded. It was originally released in 2005, and Extended Support will end on April 12, 2016.
Windows Embedded Standard 2009. This product is an updated release of the toolkit and componentized version of Windows XP. It was originally released in 2008, and Extended Support will end on January 8, 2019.
Windows Embedded POSReady 2009. This product for point of sale devices reflects the updates available in Windows Embedded Standard 2009. It was originally released in 2009, and Extended Support will end on April 9, 2019.
Why does support for Windows XP Professional for Embedded Systems end with Windows XP?
Windows XP Professional for Embedded Systems is a specially licensed version of Windows XP Professional for industry devices, delivering the full features and functionality of Windows XP. Given this relationship, both operating systems followed the same release schedule and share the same timeline.
Why will Windows XP Embedded be supported for two years longer than Windows XP Professional for Embedded Systems?
Windows XP Embedded is a modular form of Windows XP, with additional functionality to support the needs of industry devices. It was released separately from Windows XP and provides a separate support lifecycle to address the unique needs of industry devices. Devices running Windows XP Embedded will be supported through 2016.