Should SANs be patched to fix the Spectre and Meltdown bugs? Er ... yes and no

Is the performance-sapping spectre of the x86 Spectre/Meltdown bug fixes hanging over SAN storage arrays? The general assumption is "yes", but five suppliers say not. You would expect SANs to need patching; they run their controller software on x86 servers, after all. UK storage architect Chris Evans writes: “Patching against …


Of course they're not patching

If you've got a high-transaction system which does nothing but file, network and database work, it's going to get hammered. They put out a patch, you get 20% performance degradation, and then bosses start talking about contract negotiation and bigcorp lawyers crawl out from under their rocks. No supplier wants to set themselves up for that.


Re: Of course they're not patching

This does sound like a legitimate use case for an unpatched x86 system: if you're essentially using it as an appliance, and no third-party code is running on it, then it makes no sense to patch it and take the performance hit. If someone is able to run their malware on your SAN's operating system, they probably don't need Spectre or Meltdown to get what they want.


Re: Of course they're not patching

The other point is that, generally, they are running a proprietary OS, or at least proprietary management shells, with no standard ports or shell tools available. That means somebody first has to compromise the SAN in order to run a version of Meltdown or Spectre customised for that platform... at which point there is no point running Meltdown or Spectre exploits, as you have already gained access to the device, which you shouldn't be able to do anyway...

I.e. if the attacker is in a position to run Meltdown or Spectre attacks on your SAN, then Meltdown and Spectre are the least of your worries! (At least for the devices mentioned in this story.)

Anonymous Coward

Re: Of course they're not patching

I think DataCore runs directly on Windows?


Re: Of course they're not patching

None of these systems run a proprietary OS. Storage systems run standard operating systems, albeit cut down to what is needed to keep the hardware going and little more. Most run Linux, NetApp chose BSD, DataCore does run on Windows, IBM uses AIX with the DS8000, etc.

The storage code tends to either run in user space or as a kernel module being controlled by user space applications.

The point made by the various spokesmen in the article is that you need to be able to execute code within the OS. The only difference between this and a "normal" exploit is that you can do it as a normal user, as opposed to requiring root access. If you can't run code as a normal user then you can't exploit the bug, and on most of these systems you can't run code as a normal user.

The only systems potentially exposed are the true software-defined ones. By true software-defined I mean you get to choose the hardware, as opposed to "you can choose any hardware as long as it's this hardware", which, while little more than marketing bullshit, ironically means you're protected against this exploit.

Black-box does make a lot of sense.

Anonymous Coward

Re: Of course they're not patching

That's nonsense. The majority of storage vendors do not allow external programs to execute on the array itself; since it is only passing data or executing firmware written by the vendor, patching is unnecessary. It has nothing to do with liability or vulnerability of the system.

CIA

Re: Of course they're not patching

The article was aimed at the big storage vendors, not the SDS vendors. The big storage vendors, with the exception of one, do not run user programs on their systems. SAN software that customers can load onto commodity hardware, on the other hand, could potentially share the system with other executables; those systems should be patched.


Safe enough - IF no third party code

If there is no third-party code on a computer system (including web access) then there is no need to patch for Spectre or Meltdown. A SAN appliance that is a separate computer with no third-party code is safe against Spectre and Meltdown. A SAN appliance that has the capability to run third-party code, however, is not safe.


Re: Safe enough - IF no third party code

In the murky commercial world, that is an over-simplistic view of the situation, however. I know of several SAN products that do not officially offer any way to get code execution on them, but find the "secret" engineering backdoor and you are in.

Do you implicitly trust the fox with the henhouse in this case?


Re: Safe enough - IF no third party code

@Outer mongolian..., if you find the "secret" engineering backdoor and use it to expose your system to possible compromise, that's your fault and you deserve what you get :)

Anonymous Coward

Re: Safe enough - IF no third party code

If there is a "secret" engineering backdoor then this is a much more significant problem than Spectre or Meltdown.

If a device is not intended to allow execution of any software other than the device software, then it is a catastrophic security failure if a means to do so is found. Once an attacker can run arbitrary code on a device not intended to allow it, there are almost certainly much easier and more direct ways of accessing information than Spectre or Meltdown.

It makes a lot of sense that appliance manufacturers take this approach. Whether they do a good or bad job of it is a different question.


Re: Safe enough - IF no third party code

This is the classic "it's OK to bake secret recovery/engineering/lawful intercept accounts into things" fallacy.

All I know is: if I can find it (and they don't usually fess up and tell me about these things beforehand), it's there, so others could find it too. I wasn't blessed with superpowers or the ability to do things other clever people could not, given sufficient commitment or the right combination of circumstances...


Re: Safe enough - IF no third party code

"If there is a "secret" engineering backdoor then this is a much more significant problem than Spectre or Meltdown."

Go down and watch the team commissioning all your new hardware, discreetly shoulder-surf them, and if it has an in-life failure, see how the vendor's engineer recovers it. It can be very, very enlightening.

These are our industry's dirty secrets, tucked away and not spoken of openly, because they make life easier on a day-to-day basis for the people running the hardware. Trot out to the DC, pull that chassis and recover it back to base as per the official procedure, versus get a coffee, sit at your desk and use the "shortcut" to make life easier. I know which the majority of (human) people would do.

People leave teams, move companies, occasionally talk to other people inappropriately, find things independently when they shouldn't, and other shenanigans. Yes, it's been our role, when such a thing is discovered, to have it removed or controlled, but then you are into asking the vendor to fix issues on a black-box appliance. Are you suggesting this simply does not happen?

It's a much broader topic, I agree, but it's why I have difficulty taking at face value any PR statement that something is a black-box system and therefore the insides need no attention. Ever.

Last post in this thread.


Re: Safe enough - IF no third party code

If they can get access through a backdoor, then the Meltdown and Spectre vulnerabilities are moot. They already have full access, so they don't need any further exploits.

Anonymous Coward

Re: Safe enough - IF no third party code

"These are our industries dirty secrets tucked away and not spoken of openly much because they make the life of people running the hardware easier on a day to day basis."

If an engineer at the company I work for were to create such a backdoor in software that shipped to a customer, they wouldn't just be looking at dismissal; they'd probably be looking for a lawyer to keep them out of jail. That's the kind of 'dirty secret' that can kill a company. We do design, code and security review with the precise aim of making sure that such stuff doesn't get put in.


Re: Safe enough - IF no third party code

What about Pure Storage's Purity Run functionality? It gives you the ability to run VMs and containers on FlashArray controllers, so they will have to patch FlashArray.

And some vendors use Linux as the base for their storage OS; EMC Unity, for example, is based on SUSE. The Meltdown/Spectre fix is a kernel patch, so eventually EMC will have to move their OS to a newer kernel version and may see performance degradation.


Re: Safe enough - IF no third party code

> Meltdown/Spectre patch is a kernel patch, so eventually EMC will have to update their OS to newer version of kernel and may hit performance degradation.

Nope, the patches mostly add the *option* of fixing the bug. You can disable them when you build the kernel, or (for most of them) with a runtime config option. So EMC can disable them.
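For the curious, this is roughly what those knobs look like (a sketch assuming a reasonably recent mainline kernel; exact option names vary by version and distribution):

```shell
# Build-time: compile page-table isolation out of the kernel entirely
#   CONFIG_PAGE_TABLE_ISOLATION=n
#
# Boot-time: keep the code but switch the mitigations off on the kernel
# command line (append to GRUB_CMDLINE_LINUX in /etc/default/grub,
# then regenerate the grub config and reboot):
#   pti=off nospectre_v2

# Runtime check: 4.15+ kernels report their view of the exposure here
grep . /sys/devices/system/cpu/vulnerabilities/*
```

An appliance vendor shipping its own kernel can simply leave the options compiled out, which is presumably what the suppliers quoted in the article intend.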


Re: Safe enough - IF no third party code

"If an engineer at the company I work for were to create such a backdoor in software that shipped to a customer they wouldn't just be looking at dismissal, they'd probably be looking for a lawyer to keep them out of jail."

Agreed. The days of hardcoding a special field-service password or Nintendo-style backdoor into enterprise hardware are over; the publicity if it were found out would be a killer. Consider that for it to be useful, your employees (some of whom will eventually become ex-employees) have to know about it!

Besides, even the built-in backdoors that used to be common were just a way to get in as 'admin' (or sometimes "admin plus") if your customer forgot the password. Even if they dropped you into some sort of shell, it's a long way from that to having the proper build environment to compile something that will run on the box.


Re: Safe enough - IF no third party code

There is no secret engineering backdoor in any storage product I know of, and I have worked on a few. There are ways to get in, the most obvious being the root password, which will be known to the vendor. However, that gives you root access, and having root access renders the threats under discussion moot, as the whole point of them is that they allow malicious code to read areas of memory normally accessible to root alone.


Re: Safe enough - IF no third party code

Purity Run, which runs a Windows file server VM on your controllers, now looks pretty suspect. Pure has said that if you don't have that feature then there will be no need to patch.

CIA

Re: Safe enough - IF no third party code

Purity //Run is not a disabled feature; it is always on because it's built into the Purity OS. They advertise all of their features as ALWAYS ON... you can't disable them. This IS the one system from the major vendors that DOES need to be patched, because it can run executables within VMs sharing memory space on the controllers; that's what it's there for (although Pure markets it as 'using the additional idle resources on the system and getting the most from your storage system'). Surprisingly, Pure hasn't addressed this publicly. Likely they are scrambling to put a fix into the next version of Purity before someone throws a bombshell their way, as it would put a dagger in their upcoming end-of-year financial announcements. It's all about strategy...


Some of the responses are true: they're x86, but not all Linux underneath. NetApp, for one, was originally a fork of a *BSD (as anyone who's played with the 22/7 menu will be aware). A tool reported security issues in a NetApp Filer during testing, although I couldn't reproduce the attack manually. The due-diligence process meant it had to be raised as an incident, and after some work with NetApp themselves, the tool was found to be misidentifying the version of the daemon (it relied on a simple version string): code analysis showed they had fixed the vulnerable code in their library but hadn't bumped the version string, so to a dumb analysis tool it looked open to the world.

For the others, that's quite common: "it's like a washing machine, a black-box system", ergo they don't feel they have to fix the mess inside. Which is acceptable in some quarters, provided there really are no vectors they haven't taken into consideration or are hiding for business reasons.


> ... code analysis showed they fixed the vulnerable code in their library but didn't bump the version string up, so to a dumb analysis tool ...

To be fair, in the scenario, it's not the analysis tool that's the dumb one.


VSAN/HyperConverged

Would suggest VSAN/hyperconverged systems are a different matter? Storage and networking software is running on the same hardware as end-user VMs, so the host needs mitigations (against untrusted VMs) applied, which could well impact overall system performance.

There's a nice article on Spectre/Meltdown hurting performance, with some clear explanations of PCID/INVPCID and all that, on a competing technical site.

Thanks Intel & co.


Silly question or not?

Reading all about this at the Reg and elsewhere: should other I/O-intensive products within the system be concerned?

Does Meltdown/Spectre affect the chipset in SSDs and hard drives directly?

Will we be seeing firmware updates to address this, and what might the performance impact be on such component level hardware?

Any Reg experts online to advise :)

Anonymous Coward

Re: Silly question or not?

Does Meltdown/Spectre affect the chip-set in the SSD and hard drives directly?

Likely to be the same situation: is there any way to download and run your own code on those devices? If not, then Spectre/Meltdown won't matter. In general the only way to run your own code would be to reflash the firmware, and if you can do that you have total control anyway; no need for malware.

Anonymous Coward

Re: Silly question or not?

I would assume you could theoretically check an SSD for what is in its cache. But checking through iterations of a 215k email file, to see which one gets an "in table, no you're not allowed access", is going to take more time (billions of years?) than the usual attacker has.

Spectre and Meltdown work at the bit/byte level, so it's a lot easier to build up a file. SSDs may only return yes/no answers quickly or slowly if the exact, entire file is in the cache. A block-size attack may be possible? This may also apply to network kit at the packet level, with DDoS/MitM attackers timing response/resend speed even when they don't know the contents of the packet. (The Reg has an article further down the page on malware scanning "inside" encrypted packets!)


Embedded systems

On the whole, if an attacker can run their own code on an embedded system, it's game over anyway. Doing so should require high-level permissions that allow all kinds of other entertainment to happen.

If it doesn't, then, while I don't want to make light of these issues, there are other, more urgent ones that need working on.


Re: Embedded systems

Exactly. If they can gain access to the systems, you have bigger problems than Meltdown and Spectre; once in, they probably don't need those exploits to get at the data.


Same here

We have had to prepare a statement for customers too, stating that:

* We run only our code. If we allowed otherwise it would be bad for you. Really bad. And I mean you, Register readers' various communication and financial service providers.

* These CPUs are old/slow enough not to do anything speculatively.


Our NAS runs on an Intel Xeon, so it will be vulnerable - especially once it's set up as a NAS and world+dog have access to it.

Patch it? Non. I don't want a degraded NAS.

However, access to it will be tightly controlled, so any ne'er-do-well trying to get in will have a fiddly time doing so.

So. The risk is there, but it is one I'm willing to take - sacrificing security for performance this time round.

Remote desktops/VMs etc. - all patched; not taking chances there.

Anonymous Coward

Makes sense now, but what about the future?

I can understand the argument that they aren't going to patch now because they are essentially a closed system.

However, aren't they basically setting themselves up to maintain and run forked systems going forward? If they are running a general-purpose OS underneath, they are going to have to cross-port fixes (assuming an open-source OS), and the delta between the two will grow larger over time. I'd be concerned that there will be further fixes (even ones applicable to appliance-like deployments) that those vendors won't be able to take on.


Re: Makes sense now, but what about the future?

It appears that most OSes will allow you to disable these security options via a registry key or boot loader option.
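On the Windows side, for example, Microsoft published registry overrides for exactly this (a sketch of the KB4073119-era guidance; check the vendor's current advice before relying on it). Run from an elevated prompt, then reboot; a value of 3 in both entries disables the Spectre v2 and Meltdown mitigations:

```shell
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management" /v FeatureSettingsOverride /t REG_DWORD /d 3 /f
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management" /v FeatureSettingsOverrideMask /t REG_DWORD /d 3 /f
```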


SANdemonium

Maybe these appliances run a stack that's all at ring 0 so there's no context switching anyway?


Re: SANdemonium

No, they are all running either Linux or BSD, with at least one running Windows (the Clariion/VNX controller). I'm not aware of any that wrote their entire OS, including the kernel, for a general-purpose CPU, which is what would be required for it all to run at ring 0.

If you are going to go to that much effort, you'll develop your own ASIC, like EMC did for the Symmetrix/DMX/VMAX line. Obviously that has some sort of OS (Enginuity, or whatever they call it now), but it isn't a general-purpose CPU and you definitely aren't going to be able to run your own code on it even if it were vulnerable.


Sanity at last

At least some people realise that just because there's some theoretical risk of something, it doesn't mean it's an actual risk in a particular context.

Meanwhile, security researchers look at everything through the distorted prism of 'security', without realising that it's only one factor and that there's more than one way to deal with a problem.


Performance impact can be mitigated if patches applied correctly

Newer versions of the KPTI (formerly KAISER) patches in Linux will allow the address-space separation to be applied only to certain processes. That selectivity is useful for performance: most of the code stacks managing SAN and NAS machines divide execution between non-critical and critical core processes, so in theory you can apply KPTI only to the non-critical parts, which run user-facing things such as nodejs, ssh and various other third-party components, and leave the performance-critical proprietary data path in its own set of processes with KPTI disabled, and thus with performance unhampered.


Re: Performance impact can be mitigated if patches applied correctly

> applying the address space separation only on certain processes

That's interesting. Do you have any link to more information please?


I have heard this from several appliance suppliers, such as Kemp load balancers: "it's a closed system that does not allow the running of any user code, so it is secure" and therefore does not need to be patched.

Seems a risky stance, until you think about it.

Is your storage system the weakest link in the security of your data? I would imagine the unpatched Windows box that hosts the data, or a user giving up their credentials to phishing, would be a far easier mark than the storage system.

Storage is likely to be in the last 2 or 3% of security patching.


Closed until it is not

This has nothing to do with whether they are closed or not, and everything to do with performance. All storage vendors, particularly those mentioned, are competing for the last IOP that can be squeezed out. Anything that impacts that is bad, and whoever does patch will immediately be at a disadvantage.

Does any of this sound familiar? Yes, it is the exact reason we are in this situation in the first place, performance is of greater importance than security.

At some point some clever person will find a way to exploit this (bearing in mind all these products are now connected in some way to the internet, with phone-home etc.) and then BANG!!!!

At that point a smaller vendor will probably go out of business.



Biting the hand that feeds IT © 1998–2018