Always late to the party, IBM reveals itself to be NVMe fanboy

IBM says it's developing systems with NVMe across its storage portfolio, and wants to ignite an industry-wide leap in system performance. Its NVMe products will come to market in the first half of 2018. That's a year away. Apeiron, Excelero, E8, Micron and many others already have NVMe SSD-supporting systems and are …

Gaming leads the way

NVMe has been standard in high-end gaming gear for nearly two years now. Funny how innovation in IT is driven by consumer demand, while most CIOs are happy with vintage tech.

Anonymous Coward

Re: Gaming leads the way

True. IT is the only industry in which 'consumer' means cutting edge and extremely advanced, while 'enterprise' means legacy, outdated and expensive. Mainly because most consumers always want the next best thing, with no qualms about change... and enterprises just want the status quo and never to have to change. They will probably have to change at some point if these enterprise businesses want a chance at competing when the Amazons of the world enter their markets.


India Business Machines?

India Business Machines? Didn't they used to be a technology company or something?


Re: India Business Machines?

Nah - Ignorant Boring Morons who always led from the rear, the very distant rear. They relied on marketing and brand recognition back in the days of "No one got fired for buying IBM". I cannot think of a product in the last 40 years where they were not a follower.

Anonymous Coward

IBM's storage offerings are legacy, proprietary, and complex.

They have to make an acquisition or they will be dead in the water for storage, as what they have now is not something that can be turned into NVMeF. Having SVC in the data path isn't helping either. SVC in the control plane only might be useful.

Anonymous Coward

IBM's storage division does not have the funding or the support from above to develop a new product. They buy in products nowadays or cobble existing products together. SVC was the last thing they developed themselves from scratch, and that development decision was made back in the 90s.

NVMe is a game changer and proves that SSDs really weren't. The big difference is the latencies involved are now creeping into the territory of the interconnects between storage devices. Despite what many will say, enterprise storage is still necessary for most businesses; it can provide the resilience needed and implement important features such as consistency groups across applications, replication and so on.

You mentioned SVC being in the data path, something which has always been capitalised on by IBM's competition as adding latency, but in reality it only adds a couple of hundred µs in most cases - nothing when the average latency is on the order of milliseconds, and still fairly insignificant using legacy flash (yes, we're now talking about flash as legacy).
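To put rough numbers on that claim, here is a quick sanity check. The figures below are illustrative assumptions, not measurements of SVC or any particular product; the point is just how the same fixed overhead looks against different backend latencies:

```python
# Illustrative latency arithmetic for an in-band virtualisation layer.
# All figures are assumptions for the sake of the example.

def overhead_pct(added_us: float, backend_us: float) -> float:
    """Added latency as a percentage of total request latency."""
    return 100.0 * added_us / (backend_us + added_us)

added = 200.0                        # ~couple of hundred us in the data path
print(overhead_pct(added, 5000.0))   # vs ~5 ms spinning disk  -> ~3.8%
print(overhead_pct(added, 1000.0))   # vs ~1 ms "legacy" flash -> ~16.7%
print(overhead_pct(added, 100.0))    # vs ~100 us NVMe         -> ~66.7%
```

Which is exactly the earlier point about latencies creeping into interconnect territory: an overhead that disappears against spinning disk dominates once the media itself answers in tens of microseconds.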

With shared storage, NVMe will end up being used as a read cache by most vendors and they will either try to put it in the server or in the storage as close to the server as possible. There's no need to cache writes this way because DRAM is faster still and works fine now. The difference as always will be in how to ensure that the right data is in the NVMe at the right time and vendors will come up with inventive ways for doing that. I'm not sure that IBM is up to that challenge, based on past experience. In fact, I'd say the same for most of the big legacy vendors.
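Getting "the right data in the NVMe at the right time" is essentially a cache-management problem. A toy sketch of the pattern described above (an NVMe tier acting as a read cache in front of slower shared storage, with write-through so writes still land on the backend) might look like this; all class and method names here are hypothetical:

```python
from collections import OrderedDict

class NVMeReadCache:
    """Toy LRU read cache standing in for a fast NVMe tier in front of
    slower shared storage. Writes go straight through to the backend,
    but the cached copy is updated so reads never see stale data."""

    def __init__(self, backend: dict, capacity: int):
        self.backend = backend      # stands in for the slow shared array
        self.capacity = capacity
        self.cache = OrderedDict()  # LRU order: least recently used first

    def read(self, block: int):
        if block in self.cache:                 # hit: served at NVMe speed
            self.cache.move_to_end(block)
            return self.cache[block]
        data = self.backend[block]              # miss: fetch from backend
        self.cache[block] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)      # evict least recently used
        return data

    def write(self, block: int, data):
        self.backend[block] = data              # write-through to backend
        if block in self.cache:
            self.cache[block] = data            # keep cached copy coherent
```

The basic structure is the same everywhere; the "inventive ways" vendors differentiate on are the eviction and prefetch policies, not this skeleton.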


To clarify, SVC in the data path isn't a problem of latency or features for customers; it's a problem of vendor lock-in, complexity, etc. SVC technology is decent and has a lot of uses. But many enterprises I work with are unhappy about having to use SVC to get data services: every storage offering IBM has uses some form of SVC (full blown, stripped down, embedded, whatnot) in its platform, and customers want to move away from that.

NVMeF promises to fully decouple data services from JBOF in a granular fashion, removing the legacy monolithic stack limitations, while squeezing even more latency out of the flow. You're correct, IBM can't build that. My comment was really just that IBM need to acquire the technology and do it immediately, because they are very late in coming to this realization, and their current proprietary flash offerings are now legacy.

Anonymous Coward

Agree on the part about IBM needing to acquire to innovate in storage. XIV, TMS, etc. were all acquisitions. Any future products will probably need to be acquired too... if IBM wants to spend money on storage at all. I can't see them ever spending billions on a storage acquisition again. Not necessarily unreasonable, though, because SAN storage is dying. The only reason people do it today is because that is the way they have always done it, and most large businesses change anything at a snail's pace. You can just cluster commodity storage servers to get performance, reliability and much greater flexibility at lower cost, as is done by Google, fb and every leading company in infrastructure tech. Preferably do it in a cloud so you can use something like Google's petabit/s interconnect network instead of paying a fortune to run at 10G.

Anonymous Coward

Only cool if you're a startup

I'm sure they're just tired of being asked if they have an NVMe strategy so they have to make a public statement. Just because the cool kids like Pure have statements about planning to support NVMe doesn't mean everyone else is asleep at the wheel.

News flash: every commercial storage offering has some form of proprietary technology. Thankfully, they are adhering to standards around host protocols and interconnects like SAS and NVMe. Also, with SDS, at least the hardware can be open.

Anonymous Coward

Re: Only cool if you're a startup

"Also, with SDS, at least the hardware can be open."

It's such a lame sales pitch though. "With this really expensive software, you can run any make of really cheap hardware you want!" It is the VMware story. Give us $x million for the software and you could be saving literally thousands on commodity hardware.

The way to do this is just to containerize on Kubernetes/Docker, and then everything is open source and hardware is a commodity... with substantially more sophisticated tool sets than anything commercially available.



Biting the hand that feeds IT © 1998–2018