
Missed patch caused Equifax data breach

Peter Gathercole Silver badge

Re: Typical problem of many large organizations

It is easy to say in hindsight that this patch should have been applied.

But just look at the volume of vulnerabilities a company has to watch and plan patches for, across all of the software platforms it runs.

Even in the most proactive organizations I've come across, planning and testing a patch deployment, and arranging for the necessary reduction in service as patches are rolled out, can take weeks or months.

What most people don't take into account is how frequently a patch changes behavior or breaks something outright. One of the old mantras of changing systems used to be "only change one thing at a time", so that you could isolate which component broke the system. But with the 24x7 nature of many systems nowadays, service outages are hard to arrange, so patches get bundled into releases. Because you are then changing multiple components at once, it becomes all the more important to test a release before deploying it; otherwise you get panned by customers and the press for not testing before release.

So you're stuck between a rock and a hard place. If you spend time testing, you're open to the vulnerabilities while you are planning and testing. If you shortcut the testing process, then you're open to breaking the services you offer.

In my view, and I think it is a very common view, there are two significant things that have to be done.

One is to engineer the systems so that you can deploy patches to subsets of the environment while leaving the service running (for example, a leg1/leg2 split), so that you don't need as many service outages.
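Something like this, very roughly (a Python sketch; the lb-ctl and patch commands and the host names are made-up placeholders, not any particular product's tooling):

    import subprocess
    import time

    # Two legs of the same service; either one can carry the full load
    # while the other is being patched.
    LEGS = {
        "leg1": ["app01", "app02"],
        "leg2": ["app03", "app04"],
    }

    def drain(host):
        # Placeholder: tell the load balancer to stop sending new work here.
        subprocess.run(["lb-ctl", "drain", host], check=True)

    def patch_and_check(host):
        # Placeholder: apply the tested patch bundle, then health-check it.
        subprocess.run(["ssh", host, "apply-patch-bundle"], check=True)
        subprocess.run(["ssh", host, "health-check"], check=True)

    def restore(host):
        # Placeholder: put the host back into the load-balancer pool.
        subprocess.run(["lb-ctl", "enable", host], check=True)

    # Patch one leg at a time; the other leg keeps the service up throughout.
    for leg, hosts in LEGS.items():
        for host in hosts:
            drain(host)
        for host in hosts:
            patch_and_check(host)
        for host in hosts:
            restore(host)
        time.sleep(300)  # let the patched leg soak before touching the other one

The point isn't the tooling; it's that the service never has to go dark to take a patch.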

The second is to split your application into discrete security zones, so that the internet-facing systems most likely to be hacked only have access to data on a transaction-by-transaction basis, with the data provided under the control of the next zone in. Although this will not prevent data theft, it will prevent mass data extraction, as long as you have decent monitoring of transaction rates, and intrusion monitoring.
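As a toy illustration of what I mean by transaction-by-transaction access (Python again; the customers table, the column names and the SQLite back end are all invented for the example):

    import sqlite3

    def fetch_customer(customer_id):
        # The only query the internet-facing zone can ask the next zone in to
        # run: one row, keyed by the transaction it is currently servicing.
        # There is deliberately no "give me everything" call at this boundary.
        conn = sqlite3.connect("customers.db")
        try:
            return conn.execute(
                "SELECT name, address FROM customers WHERE id = ?",
                (customer_id,),
            ).fetchone()
        finally:
            conn.close()

A compromised front end can still steal records one at a time, but it can't do a SELECT * and walk away with the lot.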

The systems holding the bulk of the data, for example the database servers, are in your most secure zones, and you make sure that even if someone gets into these systems, it is difficult to bulk export data out to the internet.

The more zones you have, the more difficult it becomes to hack in and export, especially if you use different technologies for each zone. Hopefully, with enough zones, one of two things will happen. Either the hacker trips some intrusion monitor before getting too far into the system, or they decide that it is just not worth the effort to get any data.
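And the sort of tripwire I have in mind is nothing fancy (Python once more, with the baseline figure and the alert action invented for the sake of the example):

    import time
    from collections import deque

    BASELINE_PER_MINUTE = 200    # invented: normal lookup rate at this boundary
    ALERT_MULTIPLIER = 5         # invented: how far above baseline before we shout

    class RateTripwire:
        def __init__(self):
            self.events = deque()

        def record_lookup(self):
            now = time.time()
            self.events.append(now)
            # Keep only the last minute of lookups.
            while self.events and now - self.events[0] > 60:
                self.events.popleft()
            if len(self.events) > BASELINE_PER_MINUTE * ALERT_MULTIPLIER:
                # Placeholder alert: in real life, page someone and cut the feed.
                print("ALERT: %d lookups/min at zone boundary" % len(self.events))

Crude, but a hacker pulling millions of records through a boundary that normally sees a couple of hundred lookups a minute lights up very quickly.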

There are many other steps that need to be taken, but these two will mitigate the impact of software flaws, limiting the damage. Unfortunately, they have to be designed in from the beginning, and are difficult or impossible to retrofit. This means that a small quick-and-dirty proof of concept or pilot often needs to be completely redesigned to make it production-ready.

But too often, manglement see a working PoC and decide that it can just be scaled up, rather than paying for the necessary (and expensive) redesign. To them, it's all extra cost that they can't justify. And because the people implementing the PoC, especially if they are using newer technologies, are often younger and less experienced, they're not prepared to push back.

The result? Systems where the data can be reached by exploiting only one or a small number of vulnerabilities and then easily exported across the Internet, coupled with a difficult patching process. A recipe for disaster.
