Everyone screams patch ASAP – but it takes most organizations a month to update their networks

The computer industry may have moved to more frequent software security updates – but the rest of the world still takes a month or longer to patch their networks. That is one of the findings in a new report by enterprise network bods at Kollective. The biz spoke to 260 IT heads in the UK and US about their systems and security …


Patchy McPatchface

I am a dyed-in-the-wool sysadmin who owns my own company (MD). I only have around 10 Windows and 20-odd Linux servers to worry about, on a VMware cluster with a slack handful of SANs, switches etc and pfSense routers.

I can't manage to patch that lot to the Cyber Essentials standard all the time, because CE mandates patches applied within two weeks of release. That's a laudable aim and one to work towards, but the real world has a nasty habit of intruding.

For example, recently (the last two months) Mr MS unfortunately released a right old bugger's muddle of updates that broke Exchange a bit (ooh, me Transport Service has died), broke older and weirder SharePoints, and screwed Azure Sync (and the rest). I have also had RDP die on 2008R2 servers until I fixed certificate permissions and even worked out which certificate to use. I really picked the wrong time to start restricting schannel stuff and enabling other MS patches via registry keys.

I *am* the pointy haired boss and have absolute power (until my office manager kicks me into touch) and know what I am doing. I'm CREST accredited and can throw together a Gentoo box without bothering with docs. There are not enough hours in the day to patch things anymore.

I have a few customers to worry about and a few PCs as well


Re: Patchy McPatchface

MS just performs minimal testing on their patches these days before releasing them into the wild and seeing what breaks.


Re: Patchy McPatchface

We're their beta testers. And we know that, so we try to wait at least 48 hours after release to see if there are any wider signs of trouble (and recommended fixes) before initiating our own testing. So in answer to the question:

Why? What causes the delay?

I'd say it's justified caution. By the time the weekend has come and gone and you've negotiated downtime (a common consequence) for a reboot early one morning and given the business fair warning, the live systems don't get patched until 7-10 days after Patch Tuesday.

I was told last week that we're about to move to CE+, with as-yet unknown degrees of documentation to be thrown on top of the existing procedures. There was a sharp intake of breath from half the room.


Re: Patchy McPatchface

"MS just performs minimal testing on their patches these days before releasing them into the wild and seeing what breaks."

So much this! There was a time when MS actually bothered to test updates, and issues from updates were rare occurrences. These days it's a rare month where an update doesn't break something. So, given the very real and demonstrable risk of an update breaking things versus the theoretical and possible risk of a compromise from not installing it, is it really surprising that people may focus on protecting themselves from the greater and more common risk?


Re: Patchy McPatchface

Have you tried employing more staff?

What my experience (20 years) of working in the IT industry has shown me is that company bosses don't like to invest in staff; they would rather have their four holidays a year than actually run a company with a full deck. It seems the skeleton crew is the norm these days.

I despair too, because most of the organisations I have had the sad opportunity to experience have made me facepalm endlessly, to the point my forehead is bruised.

1. A Managed services company used a common password to access customer machines (password-01)

2. The owner of said managed services company didn't even know what the WEEE directive was, after I told him he couldn't just throw electronics into the normal bin

3. A technology manufacturer who developed a web app to work with their devices, had a support account which would obviously be easily guessed and the password was the same as the username

4. A telecommunications company used 'folder redirection' in their domain policy, but the folder permissions were incorrectly set to Full Control for Everyone, meaning everyone in the company could read and write every other employee's Desktop folders, My Documents, etc. This company wanted to be ISO 27001 accredited. Bwaaahahahaha

5. I once started at a company and was handed a laptop that had not been wiped since the previous owner. I asked for an ISO from MSDN and was told there weren't any available, and that he had a disc at home. I asked if it was from MSDN, and he said he "thought so", but wasn't sure

6. Same company: I was not given a deskphone, and was told to order one from eBay and claim it on expenses!

I could go on and on and on and on and on and on about the clusterfuck that is IT management
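Number 4 is at least cheap to detect: on Windows, `icacls <folder>` prints the ACL, and a couple of lines of scripting can flag an Everyone/Full Control entry. A minimal sketch in Python (the function name and sample usage are mine, not from anything above):

```python
def grants_everyone_full(acl_text: str) -> bool:
    """True if any ACE in captured `icacls` output grants the Everyone
    group Full control (F) - the folder redirection blunder above."""
    return any("Everyone:" in line and "(F)" in line
               for line in acl_text.splitlines())
```

Feed it the captured output of `icacls` for each redirected folder, e.g. `subprocess.run(["icacls", path], capture_output=True, text=True).stdout`, and alert on any folder where it returns `True`.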


Re: Patchy McPatchface

Pardon my ignorance - what's CE+? Not Windows CE, surely?


Re: Patchy McPatchface

It's not just their patches that barely get any testing...


Re: Patchy McPatchface

what's CE+ ?

Cyber Essentials Plus.


Re: Patchy McPatchface

I will just leave this right here -

https://borncity.com/win/2018/07/14/microsofts-july-2018-patch-mess-put-update-install-on-hold/


Re: Patchy McPatchface

We have a similar-sized environment. I just leave MS to be patched by WSUS, servers included. We run a Hyper-V cluster and that has Cluster-Aware Updating. All our MS estate is as patched as can be. The updates run two days after release, on the Thursday, with the thinking that bad patches will have been pulled by the first-day beta testers by then.
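That cadence is just "second Tuesday of the month, plus a hold-back". A quick sketch of the arithmetic in Python (the function names are mine; the two-day default matches the schedule described above):

```python
from datetime import date, timedelta

def patch_tuesday(year: int, month: int) -> date:
    """Second Tuesday of the month - Microsoft's Patch Tuesday."""
    first = date(year, month, 1)
    days_to_tuesday = (1 - first.weekday()) % 7   # Monday=0, Tuesday=1
    return first + timedelta(days=days_to_tuesday + 7)

def wsus_install_day(year: int, month: int, holdback_days: int = 2) -> date:
    """Scheduled install date: Patch Tuesday plus a hold-back window,
    by which time bad patches have usually been pulled."""
    return patch_tuesday(year, month) + timedelta(days=holdback_days)
```

For example, `patch_tuesday(2018, 7)` is July 10th, so the install run lands on Thursday, July 12th.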

Our linux servers are very manual and take more time and notice to be patched.


Personally or in my workplace?

"Hold long does it take you to update your networks? And what is your future solution to the constant nightmare of security updates?"

In my workplace, I just leave it to the IT folks and don't pay any further attention to it (aside from being very thankful that's not my job).

Personally, I don't rush to update things immediately. I usually do it once per month, as anything more frequent than that is unworkable for me.


Testing, testing, and more testing

It seems to me that "network scaling issues" and "company policies" are just another way of saying "testing".

If only we could get a provider who was willing to certify, on pain of actually, y'know, paying money by way of compensation, that a system designed in compliance with their published spec would continue to work correctly after patching...

Ah well, I can dream.

Anonymous Coward

You do patching regularly and religiously once you've seen the outcome of not patching.

A server outage due to a patch is easier to explain than a data breach lawsuit....you only have to hear about it from a friend to know you DON'T want one.


"A server outage due to a patch is easier to explain than a data breach lawsuit..."

Unfortunately it isn't. A patch is a change, and failed changes, particularly those that cause customer impact, are ITSM black death.

Customer: "Why weren't we told you were going to patch?"

Us: "We patch every night and have told you so"

Cust: "Why didn't you give us the opportunity to test?"

Us: "Because we waited weeks the last time, and you still didn't do any testing that we could observe, other than asking us to switch it off one night"

Cust: "Why don't you use something secure?"

Me: "You don't like our mainframe and demand something shinier"

Cust: "Why didn't you tell us this change would break our geegaw?"

Us: "!@##$%$#"


DUUUUUH

This is not just a sysadmin problem. Let's say you push a patch to a commonly used development framework such as .Net or (ptui) Java. Suddenly a business-critical application falls over, so you roll the patch back and report the issue to the developers, who say they can't possibly get to testing against the new version, so you go and badger their managers, who eventually come around, if you're lucky, to realizing that having a security breach is Bad, so they repurpose the developers to update their code. Might take a week, might take longer. Obviously, the breakage should have been caught in QA, but--bad luck--the entire QA department was let go/offshored/never existed because it was not clear to the PHPTB (pointy-haired powers that be) that they added any value. Some amount of time later, the vulnerability is fully patched. Rinse, repeat as needed for every motherfucking patch that comes down the pipe across your entire estate, but if your environment gets pwned it's the fault of the sysadmin team.

The problem is not "content distribution," and the Kollective can fuck right off. I'll go lie down now.

Anonymous Coward

Re: DUUUUUH

such as .Net or (ptui) Java.

Seriously? Someone prefers Microsoft? It takes all kinds of people.

Suddenly a business-critical application falls over,

And that's because these "business-critical applications" were written like someone hired shit-flinging monkeys for the whole pyramid (more like an inverted pyramid, amirite?), from management down to the just-out-of-school "experienced developers", and then expected some viable delivery. There is no automated test code, nor even unit tests. There is no documentation. There is no logging. Sometimes there is no source code, design, or general idea of what's actually supposed to happen. Programs running as sysadmin, on the Internet. In JavaScript. If there is a licensed tool or library, someone finagled something so that the tool actually - perilously - succeeds in performing a task for which one would *really* need to pay the next tick up the monetization ladder. For others, support has been left to expire and won't be renewed because "muh budget", and reactivating support after 5 years is top dollar. Or the original company went extinct, with the intellectual property ending up at Computer Associates or lost forever beyond the reach of mankind.

"Business critical" is a forever clown show of dumb slogans and wilful ignorance pretending that it won't be mugged by sad reality.


Patching Hourly

I'd patch hourly if it didn't break shit. Unfortunately it breaks shit. Not every time. Generally when you are least expecting it, and sometimes two weeks after it was applied, ... "Every time the EOM jobs run now they puke, and it takes hours for us to clean it up... What gives?" You can pie in the sky all you want, but if you change shit, you break shit. The more shit you break the worse your reputation, and the more push-back you get the next time you want to patch. YMMV


It depends

Most places I've worked at always test the patches first. So however long that takes.


Re: It depends

You don't have the luxury of time to test anything. As soon as the patch is released, the bad guys can do a delta of an unpatched system and the patch and work out the vulnerability within minutes. A teenager in his bedroom in Macedonia will work out an exploit within hours.

This isn't the 1990s. Many of your servers will be VMs, so you can take a snapshot or a clone, or, should it go monkey, restore the VM from backup. Microsoft patches are all installed via Windows Installer, so you can roll back very easily.

Idiots argue if you patch systems, you might break something. The truth is, if you don't patch systems, something's going to get broken.

Now and again you might break something by applying a patch. So what? Stuff breaks each and every day in IT, and each and every day we fix it. That's what we do. Stuff breaks, we fix it.

Ultimately you have two choices: patch ASAP or get pwned.

Anonymous Coward

Re: It depends

Because your 'so what' in some organizations can result in hundreds of thousands of the currency of your choice going down the drain. I've seen it. In a lot of places, the O/S patching is determined by the application you are running, not the other way around. What's the point of having an O/S sat there all nicely patched when your app won't start! Or worse still, it starts but subtly doesn't do what it did before.

I am certainly not arguing against the need to patch - it is critical. But so is some level of testing, and of course buy-in from the application vendor. To patch at the drop of a hat - that way lies madness.


@ Mr Dogshit

It is obvious that you are not in charge of ensuring that over 1,000 people can work every day.

Neither am I, but I have rather close relationships with people who do. And I have learned from them that patching is a tightrope exercise in managing not only safety and machines, but people and expectations.

Yes, security is obviously preferable. However, you always have Mr Performer who just can't have a minute without his server access, because he is making all the money for the company so his needs trump server downtime needs. And since he is the guy bringing in a fair chunk of revenue, his managers are on his side.

Of course, the admin knows that if the network is breached, it will be his fault and maybe even his ass, but the divas are the ones who give the okay for downtime, not the admin.

LDS

"can result in hundreds of thousands of the currency of your choice go down the drain"

The new GDPR requirements and fines for data breaches that lead to personal data leaks - especially if not disclosed in time - could change that approach...

It could also lead to better-written applications that don't break when a non-breaking patch is applied (unless the patch itself has big issues, of course) - too many applications are too fragile.

Anonymous Coward

Re: "can result in hundreds of thousands of the currency of your choice go down the drain"

Personally I think it will take time for GDPR to sink in, and even more time for the big corps and PHBs to sit up and take notice - probably after the first few big fines hit the news. But it still doesn't entirely address the issue of application certification and support on patched O/Ses.

A lot of businesses are trying to use off-the-shelf software rather than house their own IT devs (it's allegedly cheaper, right?). Once that happens, the only recourse is to badger the vendor - who more often than not is a small dev company that doesn't always understand the implications of running its applications in an enterprise environment (not all, I hasten to add, but enough to make this a problem). So what then? The vendors do some quick testing on a system that may or may not be at the same patch level as your production system. You get a tick in the box and.....

You are right though - too many applications are fragile, and that (I believe) is a limiting factor in being able to patch (and I don't just mean O/S patching - the same applies to databases, middleware, applications etc).


Re: It depends

"Idiots argue if you patch systems, you might break something. The truth is, if you don't patch systems, something's going to get broken."

The "idiots" are correct, though -- patches break stuff all the time. Your argument is solid as well. So, I guess the real message here is "give up, you're fucked no matter what you do."

Anonymous Coward

Re: @ Mr Dogshit

" the admin knows that if the network is breached, it will be his fault and maybe even his ass "

In these days of patch hell, if I were to return to sysadmin, I would make sure it was in my contract that if I advised something needed to be patched ASAP, and management started crying about downtime and insisted I waited, then ANY data breach from that point until patching was not my fault and not my responsibility; the blame would be directed at whoever objected to the downtime, and I would get paid a bonus for cleaning up the avoidable mess....

I guess that with those terms it's unlikely you would get any job in IT, but I really would not want to go back to that stress again anyway. I'll stick to making a living on YouTube videos....


Re: It depends

You don't work in a mainstream sysadmin role, do you?

I work in the medical sector, and we have to have service packs validated by the vendors of medical systems (Linacs, CT scanners etc.), and in some cases even individual updates. If you don't have that, you're running a medical device without its CE mark.

If you do that, you're liable for any damage or death (yes, that's what happens when medical devices fritz sometimes) caused. Oh, and kiss goodbye to ever working anywhere again in the IT field. And maybe even get jail time for it (I've seen inquiries into medical tech where things have gone wrong, and jail time is a very, very real possibility).

It may seem very easy to you that "something breaks, so you fix it". What happens when the break corrupts databases and takes down other (you thought) unrelated systems (oh, to have a nice clean delineation of systems!)? It's an absolute nightmare.

That's why you test what you can actually apply first. This can take a couple of days; in the meantime, you're basically doing a risk assessment that says "The chances of us being hacked are lower than the chances of killing people/taking the company down for an undue length of time due to untested behaviour".

And that's the nature of a risk assessment; occasionally the risk materialises.

If you think things are a binary "easy" evaluation, you're absolutely wrong. Especially when there are limited resources/budgets to invest in systems to keep the infrastructure ticking along properly. Even with huge budgets, there's still an element of gamble.

Whichever way you go, you stand a chance of being damned, but by taking the test->apply cycle, there's a better chance of still having a job and a career at the end of it.


Re: "can result in hundreds of thousands of the currency of your choice go down the drain"

"the vendor - who more often than not is a small dev company that do not always understand the implications of running their applications in an enterprise environment"

So much, so often.

The 'solution', bought and already paid for by a non-IT part of the organisation, proved to be from a two-person company - a 'sales guy' and a 'developer' - who think it's their lucky day because they've found someone who's prepared to pay six figures.

Then

* you say "OWASP" and get a blank look,

* you ask them about licensing the database and find "that's not needed" and eventually "it's MS-Access"

:(


Re: @ Mr Dogshit

I'm a dev, not an IT person, so this might not be quite the same -- but I've found that it's not necessary to put butt-covering stuff like that in the contract. All that is necessary is to keep records of what you've said and done so that if someone downstream messes up, you can prove that you did your job properly.

Anonymous Coward

I dunno

We have had the rollout of the latest patched version of 7zip on the "TODO" list for about 3 months now.


Re: I dunno

If you understand the issue and you're not vulnerable or have mitigation in place that's fine.

If you don't know what the vulnerability is then you shouldn't be responsible for patching.


and in big End of town

unskilled, uncaring, unaccountable socialised psychopaths often take great delight in denying change requests randomly to stroke their egos. Such power, and no responsibility for the consequences. It is always the techies who get blamed. The other usual KPIs have been covered above. Usually summed up as: overworked staff.


Re: and in big End of town

@Denarius

Absolutely! Change approval is the homeplace of charlatans and idiots with a god complex. The poor techie has to fill in the PIR and the RCA and be subject to enhanced scrutiny for every other change for the next month. CI/CD is a dream of children along with pink fluffy unicorns. For most of us, change is hell. Putting in somebody else's change is inhabiting somebody else's hell.

Anonymous Coward

Re: and in big End of town

Putting in somebody else's change is inhabiting somebody else's hell.

"I have no change approval and I must patch": A Harlan Ellison short story.


For some complex software, it takes a long time to figure out "what did they break this time", and you can almost guarantee that something got broken. I'm looking at you, OpenSim.


I doubt the figures are anywhere near that. Everywhere I've worked has had at least a few machines that only get patched every six months or so, due to needing to be up 24/7. Sounds terrible, but you can't blame the techies who maintain it; it's always a lack of quality project management at the beginning that fails to consider it. Microsoft could do a lot to improve the patching experience by not requiring a reboot each time; that'd speed up server patching.


"Microsoft could do a lot to improve the patching experience by not requiring a reboot each time, that’d speed up server patching."

This.


I think the restart is to help keep the server working. I've known Unix boxes to stay up 10 years without a reboot, but Windows servers are more reliable when they get the monthly restart. I suspect the patch that MS re-releases each month is mainly to ensure there is a restart.


Do you prefer being burnt at stake or quartered?

Do you prefer to leave your network vulnerable by not patching ASAP, or to take it down with a faulty patch if you do?

Looks like IT is in the following situation: " If the stone falls on the pot, woe to the pot; if the pot falls on the stone, woe to the pot; in either case, woe to the pot"


Re: Do you prefer being burnt at stake or quartered?

And now I have my new Sig.

Woe to the pot.

Anonymous Coward

Depends on the customer's cycle

My primary account of 4000+ Windows boxes takes roughly 21-28 days to patch completely. This covers Dev, Test (UAT, SIT etc), Prod and DR systems, in roughly that order. Patching on Dev starts on the evening of Patch Tuesday, after the customer has assessed the patches and given their own criticality rating.

Our limitations are based on agreed change windows with the customer and requirements from the customer for x amount of time between Dev, Test, Prod and DR changes. We also have lead time requirements for changes depending on the environment so changes have to be raised with enough lead time to be approved before the deployment.

No two environments for an application can be targeted the same night - no patching of Dev and Test together, regardless of application complexity - even though the same teams do the post-implementation validation for both environments.

Mission-critical systems' DR is usually left for 7 days after Production, to allow for any possible issues that may not be detected earlier (occasionally used functionality, etc).

Of course, if it's critical enough then it becomes a case of trying to patch as much as can be done (with more limited testing/timeframes) in a single night, without overloading the patching infrastructure and while keeping enough people to cover day-to-day operations.
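As a rough illustration, the staged cadence above (Dev on Patch Tuesday evening, each later environment a change-window behind the previous one, DR trailing Prod by a week) can be written down as a simple schedule calculator. The week-long default gaps are illustrative assumptions; the real ones come from the agreed change windows:

```python
from datetime import date, timedelta

def rollout_schedule(patch_tuesday: date,
                     test_gap: int = 7,
                     prod_gap: int = 7,
                     dr_gap: int = 7) -> dict[str, date]:
    """Dev patches on Patch Tuesday evening; each later environment
    waits the agreed number of days after the previous one, with DR
    trailing Production to catch late-surfacing issues."""
    dev = patch_tuesday
    test = dev + timedelta(days=test_gap)
    prod = test + timedelta(days=prod_gap)
    dr = prod + timedelta(days=dr_gap)
    return {"Dev": dev, "Test": test, "Prod": prod, "DR": dr}
```

With those defaults the whole Dev-to-DR cycle is 21 days, at the bottom of the 21-28 day range quoted above.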

Other accounts I've heard only patch once every 3 months.


Oh FFS

More than a third of IT managers – 37 per cent – view the slow installation of software updates as the biggest security threat they face; more even than idiot end-users choosing bad passwords (33 per cent).

More than a third of IT managers - 37 per cent - are idiots, said LucreLout.

Patching is important, very important. There, I said it. But your biggest risk is a slow patch? Erm, no, no it isn't. It's your developers.

The last bank I worked for made the mistake of giving every dev read access to everything in version control. And some devs had checked in secrets, such as connection strings, user names and passwords for service accounts, systems access, mail servers etc etc. I could have done anything I wanted to their systems and they would have had absolutely no way of tracking it back to me.

And that's before we get to things like prod access for builds and releases - allowing me to make users do stuff on other systems while interacting with my own desktop app (I did a demo of this for management using development environments, whereby I had them checking their risk exposure on my system while it secretly executed trades on another, due to their use of cookies/remember me). They still didn't fix the problems......

Your developers should be some of the smartest people in your organisation, and they should have a very good handle on exploiting software (hard to do defensive coding if you don't know what to defend against).

So when they think a slow patch is their biggest risk, I think they don't have a clue what they are doing and should vacate their roles immediately. Your biggest risk isn't that your admin applies a patch 4 weeks late, your biggest risk is your developers. Try to remember that when the temptation to screw them over inevitably rears its head - hopefully they know shitting on the admins isn't a good idea already.


Re: Oh FFS

I could have done anything I wanted to their systems and they would have had absolutely no way of tracking it back to me.

I worked in the SOC of a well-known multinational megabank, and I think you may be mistaken.

Anonymous Coward

Re: Oh FFS

But you forget, not all banks are created equal.


In an ideal world

You would have a pre-prod environment which exactly mirrored prod, and a comprehensive set of automated tests that could verify 100% (or as near as you can get) that everything works. Patch that, run the tests, and if all shiny then patch prod.

I haven't encountered many (OK, any) outfits that want to put in the investment to do this. Which is daft as it would also benefit your dev process hugely. Meanwhile, IT still gets the kicking when stuff is rolled into production and things break due to inadequate testing in an inadequate test environment.

Anonymous Coward

There are other considerations in how often you patch. If you have the tools and protections in place to stop most nasties or hackers, then you don't have to be so strict. NSX is a prime example: this server only talks to that server on these ports, so why would it need a critical IE patch or Meltdown fix when there is no attack vector open?


Try patching an Exadata once a month LOL

It takes almost a month to get an Oracle Exadata back up and running after Oracle themselves patch it; if we patched it every month we'd never get to use it!


Avoid the Bleeding Edge

After a few deep and nasty cuts from the "OMG, gotta patch now" dogma, I have learned to let others get cut on the bleeding edge or burned on the hot kettle... I'll wait and see if the patch really works as it says on the tin...

Anonymous Coward

Quarterly

...that is, at my employer discussions are underway about the practicality of moving to a quarterly cycle at some point. Don't ask what it's been historically.

location: City of London

sector: financial services


Prioritise carefully

I won't allow patching without testing... except very occasionally on Internet-connected devices/servers.

Everything else gets a test cycle.

That can be 1 day, more usually two weeks, sometimes longer.

We have a LOT of legacy systems and applications that really rely on a cobbled-together patchwork - and that means some patches do get rejected.

About to find out what that means for Cyber Essentials Plus - but whatever the outcome, business operation trumps potential risk.

I'm not for changing!


Patching is like throwing rocks off a cliff. You probably want to look at what's below before you roll a big one.

We have over 20k end points and probably 2k different and significant applications. Many old, fragile and badly written, many involved in life and death services.

We have to be very careful when rolling out our patches to ensure we don't wreck anything significant in the process. MS's current practices make this very difficult.


Biting the hand that feeds IT © 1998–2018