The 4 stages of All-Flash storage: Denial, anger, bargaining... and integration

Last week saw yet more stories and announcements on all-flash storage. IBM released new FlashSystem products based on XIV and an all-flash version of their mainframe/enterprise platform, the DS8888. I had another briefing from Violin (although that didn’t reveal anything new) and rumours started to swirl that X-IO Technologies …

  1. ecarlseen

    Of course most of us will wind up all-flash

    At least for the fat portion of the bell curve of user needs. The cost of flash storage is dropping faster than our need for additional storage is growing, so the lines will inevitably cross - for the vast majority of cases within five years. It will be driven more by cost and reliability than by performance. The vast majority of the applications we manage gain nothing significant from moving to all-flash. Even tiered storage doesn't make a huge difference in most cases, because it's been reasonably cheap to throw RAM cache at various problems. Eventually software will be re-architected to take advantage of ultra-high-speed persistent near-line storage like Intel's new offering, but in the meantime it's useful only in edge cases (of course there are a LOT of edge cases in a market this size).
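    To make the crossover claim concrete, here is a minimal back-of-the-envelope sketch; the starting prices and annual decline rates are illustrative assumptions, not figures from the article or from this comment.

    ```python
    # Illustrative sketch of the "lines will cross" claim. All numbers below
    # are assumptions for the example, not sourced figures.

    flash_per_tb = 400.0   # assumed flash street price, $/TB
    disk_per_tb = 100.0    # assumed disk street price, $/TB
    flash_decline = 0.30   # assumed annual flash price decline
    disk_decline = 0.05    # assumed annual disk price decline

    year = 0
    while flash_per_tb > disk_per_tb and year < 15:
        year += 1
        flash_per_tb *= 1 - flash_decline
        disk_per_tb *= 1 - disk_decline
        print(f"year {year}: flash ${flash_per_tb:,.0f}/TB vs disk ${disk_per_tb:,.0f}/TB")

    print(f"Under these assumptions the lines cross in year {year}.")
    ```

    With these particular assumptions the gap closes in about five years; different assumptions move the date, not the direction.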

    1. John Sanders

      Re: Of course most of us will wind up all-flash

      Whatever solution wins will be the most open/standard one, and the one that requires rewriting the least software.

      Anything else is just proprietary crap.

      1. Androgynous Cow Herd

        Re: Of course most of us will wind up all-flash

        Bold statement. I think first you should define "Wins".

        Then you should define which market you are talking about where "Open/Standard" is the defining criterion that will trump the specialized feature sets in the 3rd and 4th gen flash devices.

        Then consider the relative market position of the various examples of "Proprietary Crap" that dominate the modern datacenter, and their relative strengths, based on feature set, compared to the best "Open/Standard" platform you can think of.

        For a specific subset of use cases, you are exactly right. But most storage companies are not going to bother with that subset anyway. The customers who embrace that standard are the same ones who believe they can "buy a bunch of SSDs and an enclosure at Fry's, install FreeNAS and have the same thing". They are both wrong and very tedious to talk to. The storage industry term for those prospects is "Unabombers".

        For the storage manufacturer, there is much more money in selling to the petrochemical and financial sectors (as an example), where a bold statement like yours would be considered naively laughable. Even the big players that embrace an Open Sores approach tend to have proprietary platforms for datacenter-level storage (SAN/NAS), either ones they bought from a vendor based on criteria like feature set and support offering, or highly customized code running on a fairly specific hardware stack.

  2. Zippy_UK

    BIG BROTHER NEEDS FAST, UNLIMITED STORAGE

    Programmable too. Halcyon days are on their way for CIA, MI5, ...

  3. Anonymous Coward

    Agree, and I think "integration" will ultimately mean commodity. If you are integrating with VSAN or the MS version, you do not really need most of the traditional storage software functionality in the array; you just need the hardware. It is difficult for a storage provider to integrate with VMware, now also a storage software provider, without losing their storage software layer. VMware and Hyper-V are going to do to storage what they did to servers - make them commodities where it is all more or less the same, so just give me your price... assuming it isn't all in the cloud before that happens.

  4. Anonymous Coward

    And where is resiliency? Are we so blinded by "flashy marketing" that nobody has the time to see, or even care, that when two SSDs fail in an XtremIO, performance and potentially reliability are at stake? Does anyone care that when a whole FlashSystem 900 goes down in an A9000R, everything is affected?

    POCs and lab tests are usually what expose these flaws...

    I think this is stage 5, and that is the "holy crap" moment: "we didn't take reliability into account".

    1. Storageguy84

      A9000R Resiliency

      Resiliency is exactly the point of the A9000R. Each A9000 has 4x FlashSystem 900s and 8x controllers, so a single FlashSystem 900 failing will not affect performance. A FlashSystem 900 is itself a dual-controller system with 12 flash modules, and for it to fail you would need 2 flash modules to fail at the same time. Each flash module runs variable stripe RAID, and on top of that we run RAID 5 across all the modules with a spare and a parity module. You would have to suffer at least 4x flash module failures at the same time to take down 2x FlashSystems at once, and the A9000 would still keep on ticking. Keep in mind we are now talking about an absurd number of simultaneous failures, and you could always have two A9000Rs if you're still concerned.
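      As a rough illustration of the per-enclosure arithmetic described above (module counts and RAID layout are taken from this comment, not verified against IBM documentation), a minimal sketch:

      ```python
      # Sketch of the per-FlashSystem-900 failure arithmetic as described in
      # the comment above: 12 flash modules, RAID 5 across them with one
      # parity and one spare module. The variable stripe RAID inside each
      # module is not modelled.

      PARITY = 1  # RAID 5 tolerates the loss of one member at a time
      SPARE = 1   # a hot spare absorbs one further loss once rebuild completes

      def survives_simultaneous(failed_modules: int) -> bool:
          """Losses that land before any rebuild: only the parity member helps."""
          return failed_modules <= PARITY

      def survives_sequential(failed_modules: int) -> bool:
          """Losses spaced out with a full rebuild in between: the spare helps too."""
          return failed_modules <= PARITY + SPARE

      for n in (1, 2, 3):
          print(f"{n} simultaneous failure(s): "
                f"{'survives' if survives_simultaneous(n) else 'enclosure lost'}; "
                f"{n} sequential failure(s): "
                f"{'survives' if survives_sequential(n) else 'enclosure lost'}")
      ```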

      1. Anonymous Coward

        Re: A9000R Resiliency

        Thanks for saving me from having to write the correction ;-)

      2. Anonymous Coward

        Re: A9000R Resiliency

        It is XIV. I liked XIV, but the issue I always had with it was the issue I have with every cluster or otherwise dispersed architecture: yes, it can survive, often with minimal degradation, any normal component failure, but there is a definite single point of failure - the cluster or dispersion management software itself. If XIV or Accelerate itself, the data placement management software, has an issue, then you have a giant disaster on your hands. The odds are low, but the stakes are high. You are betting that IBM's Accelerate or XIV software is perfect and bug free. 99% of the time this is not going to be an issue, but for one out of a hundred it is going to be a major issue.

      3. Anonymous Coward

        Re: A9000R Resiliency

        "A single Flash 900 failing will not affect performance".

        This is incorrect. If a single FlashSystem 900 dies in either an A9000 or an A9000R, all data in the system is lost, because data is striped across all modules; it is not mirrored as it is with XIV (where you can lose a whole module without data loss). The XIV code on the A9000/A9000R does not do any data protection; that is handled solely by the FlashSystem 900s. The architecture of the A9000R is nearly identical to the way XtremIO works. On the A9000/A9000R you can lose a controller (or controllers on the A9000R), but you cannot lose a whole FlashSystem 900 without data loss.

        Bear in mind this isn't a problem: as the above poster mentioned, the FlashSystem 900 is a fully redundant system in itself anyway.
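        To illustrate the layout point being made here (with a simplified, made-up placement scheme rather than IBM's actual one): striping without any cross-enclosure redundancy means losing one enclosure loses part of every volume, whereas mirroring across enclosures tolerates it.

        ```python
        # Simplified illustration of striping vs mirroring across enclosures.
        # The placement scheme is invented for the example, not IBM's layout.
        from itertools import cycle

        ENCLOSURES = 4
        CHUNKS = 16  # chunks of a single volume

        # Striped: each chunk lives on exactly one enclosure.
        striped = {i: {i % ENCLOSURES} for i in range(CHUNKS)}

        # Mirrored: each chunk has copies on two different enclosures (the
        # XIV-style approach described above).
        pairs = cycle([(a, b) for a in range(ENCLOSURES)
                              for b in range(ENCLOSURES) if a < b])
        mirrored = {i: set(next(pairs)) for i in range(CHUNKS)}

        def volume_survives(layout, failed):
            """The volume survives only if every chunk still has a copy somewhere."""
            return all(placements - {failed} for placements in layout.values())

        print("striped survives losing enclosure 0: ", volume_survives(striped, 0))   # False
        print("mirrored survives losing enclosure 0:", volume_survives(mirrored, 0))  # True
        ```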

  5. skenniston

    It's Not About the Media

    Chris,

    I couldn't agree more with your sentiments around the 4th stage. While almost every solution that is developed goes through some evolution of innovation, when it comes down to it, clients' needs are the same: they need highly reliable, as-fast-as-practical, as-easy-as-possible-to-manage, scalable storage solutions. And in that "as easy as possible to manage" comes the ability to integrate with the applications we use every day to make our lives easier. (To the comment above saying "it is difficult for a storage provider to integrate with VMware" - it really isn't; we have done it at INFINIDAT and we are making customers' backup lives, for example, 10,000x easier with integration into vCenter, and at no additional charge by the way.)

    In order for storage to get 'better', it's not about the media. It is about four fundamental aspects of storage that help clients avoid compromising on performance, reliability and scale - which means you get all of these for one low price (along with all the storage features you need). In order to do this you need:

    1) A drive-to-CPU ratio beyond what systems do today - it needs to scale way past the 24-to-1 ratio of drives to CPU in today's hyperconverged space, the 12-to-1 that XIV uses, and the 4-to-1 in what Google is building at their scale. Having fewer CPUs, power supplies, etc. to drive the disks means less space, less power and cooling, and fewer components, all of which add up to a much better TCO.

    2) System efficiency needs to be much better than it is today - traditional RAID, which can eat 50% or more in overhead, and Reed-Solomon coding, which drags down performance, are not going to get clients what they need (see the overhead sketch after this list). RAID means more drives to cover failure scenarios, and as data scales, so does the number of drives and hence the number of failures.

    3) The ability to take advantage of high-capacity drives - in such a way, I should add, that you can still achieve good performance, for reads and writes but also when there is some sort of drive failure. It is a little-known fact that the vendors who make spinning media are still investing a great deal in these devices to help clients store more, online, for longer periods of time.

    4) The ability to make the best use of the ratio of flash to spinning media. Done correctly, systems can achieve the necessary performance for the applications without breaking the bank.
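    On point 2, here is a quick sketch of the capacity-overhead side of the argument (it says nothing about rebuild times or write performance); the stripe widths below are generic illustrations, not any particular vendor's geometry:

    ```python
    # Capacity overhead for a few generic protection layouts:
    # usable fraction = data members / total members.

    def usable_fraction(data_members: int, redundancy_members: int) -> float:
        return data_members / (data_members + redundancy_members)

    layouts = {
        "RAID 1 mirroring (1 data + 1 copy)":   usable_fraction(1, 1),
        "RAID 6, 8 data + 2 parity":            usable_fraction(8, 2),
        "Reed-Solomon wide stripe, 16 + 2":     usable_fraction(16, 2),
    }

    for name, frac in layouts.items():
        print(f"{name}: {frac:.0%} usable, {1 - frac:.0%} overhead")
    ```

    Mirroring burns the 50% the comment mentions; wider erasure-coded stripes cut the capacity overhead but shift the cost onto rebuild and write work, which is the performance trade-off the comment is pointing at.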

    These fundamental concepts are hard for traditional storage vendors to think about without completely re-architecting their systems, which is too costly for them to do. However, once a system like this is available, you can then start thinking about integration with the applications and all of the things a system like this is able to do.

    In addition, as data continues to scale (which it will), all of the traditional storage systems, REGARDLESS of media type, break down without these new fundamental concepts. So managing data at scale requires some fundamental rethinking of traditional storage... Slapping some 'older' software (XIV) behind some flash/SSD disks is NOT going to help customers.

    New, scalable systems that work more tightly with applications like VMware, and now especially OpenStack, are more and more important. It is time to help clients be competitive - and what does that mean? It means stop charging them millions of dollars for disk, and help them figure out how to spend that money on smart people who can collect more data, analyze more data and build new businesses - that is what will help.
