If there's a new patch available for Windows, you probably want it applied as soon as possible. It is unlikely, though, that you will want to roll out that update to users automatically because whether they contain new features, fix bugs or plug security holes, patches can break applications. It makes more sense for IT to use …
Here in the UK patches pop up on Wednesday, so if a critically urgent patch is issued, we get it almost a day late. I suggested elsewhere on El Reg that servers could be based in Fiji (maybe not the best place with hindsight) or Australia or New Zealand, so that as the Earth rotates folk get their patches as they get up, instead of clogging up the whole network at 6pm Microsoft time.
Of course, WSUS is limited to Windows. If you have a multi-OS environment, other solutions are better and cleaner. Indeed, some enable better patching strategies even in a Windows-only environment. Anon because of ties to such solutions.
AV signature updates...
They *do* need testing. The recurring scourge of AV updates quarantining essential system files on a false positive proves this.
The only time we've had a 'patch' knock-out a LAN is when antivirus ate the login scripts.
That said, nowadays most of the serious vulns seem to affect third-party software like Java, Flash and QuickTime. Keeping this crud off desktops that don't need it is a better precaution than patching. As is a company policy that IE shall not be used on untrusted sites.
Beg to differ.
Patches are the first defence against exploits. In a world where headlining exploits are reported on an almost daily basis I deploy patches immediately and test retrospectively.
The exposure (and subsequent cost) of a security vulnerability is disproportionately high when compared with the often unlikely case of a patch disrupting user productivity. There are herds of studies that show this - I'm still blown away by IT outfits that have to justify their egos and income by holding back patches.
"If there's a new patch available for Windows, you probably want it applied as soon as possible."
No we don't. We need to test it first just in case it breaks something else. Then and only then will it be rolled out to the rest of the users.
This article has a strange style.
Its title leads me to believe it addresses an area I am genuinely interested in,
I read the whole thing with anticipation.
In the end, all I have is an empty feeling.
Obviously it is an ad for "Get a free report and consultation with an Agile expert"
Even when it is just advertising, I expect something of interest.
Why would I read the report after an article like this?
Only to relieve my curiosity.
Can the report be as empty as the article?
Yes - most likely.
I kept reading thinking that there would be a funny punchline at the end ... but at the end of the day I think the question is .... what would Simon do?
Can we have a new icon for this type of article ... maybe a cash register?
I agree with Steve Davies... Patches first, any problems second.
It also helps to have a decent hardware/UTM firewall, like WatchGuard's or Cisco's range, to prevent most of the exploits from hitting said desktops/servers directly.
It also adds an extra level of AV/spyware protection to email/FTP/web servers, blocks any applications you don't want (IRC, P2P, MSN etc.), and helps control the data flowing into and out of the network.
As for patching: use WSUS and deploy the patches globally to the AD domain. If you have the spare servers available, patch them in batches, with say 10% of the servers and desktops patching on one day, then a business day to evaluate. All is well? Allow the second lot of production machines to patch.
As a side note: considering almost all of the exploits I see now are Adobe Flash/Reader based, why is it so damn painful to patch those products centrally?
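The staged rollout described above (a small pilot ring first, evaluate for a business day, then the rest) can be sketched in a few lines. This is a hypothetical, illustrative helper only — in practice WSUS does the grouping itself via computer groups and approval rules; the function and machine names here are made up:

```python
import math
import random

def make_patch_rings(machines, pilot_fraction=0.1, seed=None):
    """Split a fleet into a pilot ring and a production ring.

    The pilot ring (~10% by default, at least one machine) gets the
    patch on day one; the production ring follows after a business
    day of evaluation.
    """
    rng = random.Random(seed)
    shuffled = machines[:]  # copy so the caller's list is untouched
    rng.shuffle(shuffled)   # avoid always piloting on the same boxes
    pilot_size = max(1, math.ceil(len(shuffled) * pilot_fraction))
    return shuffled[:pilot_size], shuffled[pilot_size:]

# Example fleet of 20 servers -> 2 pilot, 18 production
fleet = [f"srv{i:02d}" for i in range(20)]
pilot, production = make_patch_rings(fleet, seed=42)
```

Randomising the pilot selection matters: if the same machines always pilot, a patch that only breaks a role not represented in that group will sail through evaluation untested.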
I was hoping to see a discussion of the PRODUCTION environment and/or the AUTOMATION environment and/or the DCS environment. But most of all I would expect to see a discussion of the VALIDATED environment.
This explains why Automation and Control type engineers hate the IT mob.
You will NOT patch the process control servers or clients or routers WITHOUT getting your patches cleared by the relevant manufacturer/control engineers. And that may take MONTHS.
Microsoft patches have in the past crippled factories - and caused them to go bust - because some idiot from IT applied a patch that broke the critical production servers.
See; it was you lot that persuaded managers and bean counters to force us onto Windows platforms - instead of nice bespoke systems - and you are STILL insisting on patching locked-down systems without testing first. And no, not testing on Win2xxx Ry, but on a per-production-server basis - or a virtual copy.
Even with that level of control; Microsoft patches are still known to break servers and cause loss of production - that can run into days.. and cost jobs.
Patch then test or test then patch?
I'm seeing a fascinating number of pedantic statements here, both for and against testing patches (and AV signatures) before rolling them out. This amuses me mostly because, like all such scenarios, this really should be handled as a risk analysis problem. In some environments, the potential exposure to malicious attacks is rather low, while the impact to the enterprise from a bad patch can be so great that only someone with a total lack of understanding would even consider rolling out ANYTHING - patches, AV signatures, even a one-line change to a configuration file - without extensive testing. In other environments, the situation is the exact opposite: no individual machine (or set of machines) is going to hurt the environment that much if it goes down for a short period of time, but an actual intrusion would be enormously disruptive - and there are numerous vectors for such to occur. Not surprisingly, most situations are somewhere between these extremes.
To me, it is obvious that an IT professional needs to evaluate the environment, and create a patching strategy that matches the situation at hand. To attempt to create "The One True Patching Strategy" and declare that it fits every environment you are going to walk into strikes me as naive at best.
exactly - well put
Trouble is, many IT professionals seem to have no concept of the need to understand what all those clients and servers ACTUALLY DO (apart from run Windows).
I hope you're in the IT crowd on one of my sites! Anyway, let me buy you a pint ...
Missed by that much...
While I agree that there is no one size fits all approach to patch management (or any software deployment strategy including AV defs), if your organization feels the need to test MS patches before releasing them into its environment, the same should be true of AV defs and non-MS app patches. In other words, if you need to test changes to your environment before deploying them, you ought to do so consistently.
To answer the statement, "Anti-virus signature updates don't need testing and approval, especially for enterprise-grade anti-virus systems that update their definitions three times a day," I give you the words of General Russel Honore: "That's B.S. It's B.S."
Yesterday NOD32, normally one of the more reliable AV products, started eating the main .exe from a sourceforge project we use extensively. Irony is that the executable in question is part of an anti-malware utility.
It is literally getting to the stage where if you run AV, then you can only install mainstream, big-box software. Anything else is liable to be treated as potential malware, and deleted without warning.
NIS even takes this to the stage of being literal - If a download hasn't been seen before, it pops a malware warning and deletes it, no questions asked. I start to wonder if Symantec could find themselves in legal trouble for accusing coders of writing malware without having anything at all to substantiate that claim, not even a positive detection.
Meanwhile, McAfee operate a vetting system for unknown websites which relies on information submitted by the public. While this is nowhere near as bad as Symantec's 'if we don't know you, you catch a bullet' approach, it is still wide-open to trolls. One such troll had evidently been using a 'bot script to flag tens of thousands of websites as malware-infected. Again, this raises questions over legal liability for maintaining a system which allows trolls to block public access to legit sites.
We set Sophos to quarantine; that way, if it goes wrong, we add that "wrong" file to the "OK" signature list on the server and all the clients miraculously work again.