XIV goes way of the dinosaurs as IBM nixes fourth-gen storage array

IBM is not going to develop a fourth-generation XIV storage array because an upcoming FlashSystem A9000R using 3D flash can be sold for the same cost as disk. XIV is the highly reliable, Moshe Yanai-designed array, which IBM bought in 2008 for a rumoured $300m. Unusually, the technology used clustered nodes, each using cheap …

  1. returnofthemus

    The product was rebranded as Spectrum Accelerate

    No it was not. Spectrum Accelerate was the name given to the software they decoupled from the XIV system, which coincidentally gives them a stake in that hyped-up converged market that you keep drooling over https://www.youtube.com/watch?v=Nk0VYb3a0z4

  2. Rob Isrob

    COW vs ROW

    ROW = redirect on write

    Makes sense. COW is kinda stuffed nowadays when you open up the floodgates with NVMe, all flash, etc. Most will have to go to redirect on write at some point, eh (obviously... barring architectures that are newer or get around it)? You run out of internal resources if you continue down the COW path as IO throughput balloons:

    https://storageswiss.com/2016/04/01/snapshot-101-copy-on-write-vs-redirect-on-write/

    "Consider a copy-on-write system, which copies any blocks before they are overwritten with new information (i.e. it copies on writes). In other words, if a block in a protected entity is to be modified, the system will copy that block to a separate snapshot area before it is overwritten with the new information. This approach requires three I/O operations for each write: one read and two writes"

    1. Anonymous Coward
      Anonymous Coward

      Re: COW vs ROW

      You're right: ROW means redirect on write, and the article is incorrect.

      ROW doesn't work on tiered storage. SVC centres around tiering as it was designed in the days of 7K, 10K and 15K tiers, so it needs to use COW.

      In COW, if you overwrite a block of data which has a snapshot depending on it, the block is first copied to the snapshot (which would otherwise just point to the primary copy), hence the term.

      In ROW, rather than overwriting the data, a new block is written instead, and the address map for the primary is just changed to point to the new location.
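
      For contrast with COW, here's a toy sketch of ROW: the new data goes to a fresh location and only the primary's address map changes. Again, purely illustrative, not any particular array's implementation:

```python
# Toy model of redirect-on-write (ROW): a write lands in a new
# physical location and the primary's address map is repointed;
# the snapshot keeps its old map. Illustrative only.

class RowVolume:
    def __init__(self, blocks):
        self.store = {}                 # physical location -> data
        self.primary = {}               # logical addr -> physical location
        self.next_loc = 0
        self.io_count = 0
        for addr, data in blocks.items():
            self._alloc(addr, data)
        self.snapshot = dict(self.primary)  # snapshot = copy of the map

    def _alloc(self, addr, data):
        self.store[self.next_loc] = data
        self.primary[addr] = self.next_loc
        self.next_loc += 1

    def write(self, addr, data):
        self._alloc(addr, data)         # single write to a new location
        self.io_count += 1              # the map update is metadata only

vol = RowVolume({0: "a"})
vol.write(0, "A")
print(vol.io_count)                     # 1 back-end I/O instead of COW's 3
print(vol.store[vol.snapshot[0]])       # snapshot still sees 'a'
print(vol.store[vol.primary[0]])        # primary now sees 'A'
```

      The catch, as described below, is that the primary copy ends up wherever the new block was written, which is exactly what breaks on tiered storage.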

      There is no "snapshot area" on SVC; just two volumes. All very well, but if those two volumes are on different tiers and you redirect the block of data, it will end up putting the primary (i.e. production) copy on the other tier, which clearly isn't going to work.

      This was never a problem for single-tier systems, like AFAs, or those, like XIV, which don't use flash as persistent storage. I'm not sure how IBM intend to make this work. The whole point of SVC is to virtualise different types of storage from different vendors. You wouldn't be able to snapshot from one type to another, so your test/dev would end up being forced into the same pool as your primary storage.

      I suspect they will add it as an alternative, leaving COW in place. There's history of this: the first method of migrating data wasn't very good so they added volume mirroring as an alternative, leaving the original method in place. The original method of async replication wasn't very good so they added another one.

      COW in SVC is a terrible implementation. It was fine before the days of data reduction, but the cleanup operation you need to sit and watch when you delete snapshots can be quite tedious when you just want to go home.

  3. Anonymous Coward
    Anonymous Coward

    Is the high capacity A9000 due in Q4 of 2017, or was it Q4 of 2016?

    Wondering what was the date of Symon's email.

  4. Anonymous Coward
    Anonymous Coward

    Used to work there many years ago, but remaining anonymous....

    XIV has been a thorn in DS8000's side ever since it came into IBM. The DS8000 lot are old-school IBMers who didn't like this upstart, and XIV was starved of funding, so most of the people who came in with XIV left the company. DS8000 props up the mainframe business, so it's never going to go as long as they sell Z, so XIV was starved out. They took the look and feel of the XIV GUI and applied it to the rest of the storage range (which badly needed it), but the rest of the XIV code didn't fit into the legacy architectures of the older DS8000 and SVC-based stuff. It did fit with the FlashSystem though, as it needed things like copy services, hence the A9000.

  5. Anonymous Coward
    Anonymous Coward

    A9000 quality is a problem for XIV customers

    I heard from an IBM reseller that the A9000 has serious quality issues and is 9 months behind on its committed roadmap. They are apparently disabling snapshots and compression in response to system crashes in the field. He is advising people to wait and see if an A9000 re-launch in mid 2017 fixes the quality problems.

    It's funny because A9000 is the first taste many XIV customers are getting of IBM-native developed storage (XIV was developed by an independent Israeli team). Old IBM storage customers who remember Shark are probably not surprised by the quality problems. IBM needs to up their game or the XIV-era customers are going to head for the exits.

    1. Anonymous Coward
      Anonymous Coward

      Re: A9000 quality is a problem for XIV customers

      That's interesting. I would have thought IBM would have had the FlashSystem quality issues figured out at this point.

    2. IBMer

      Re: A9000 quality is a problem for XIV customers

      A biased IBMer here

      A piece of advice: don't take this reseller's word next time.

      Handling FUD is never easy, but it is in this case: even if you want to, you can't disable compression in the A9000.

      Data reduction (of all types) is "Always-on" for A9000.

      Too bad IBM don't disclose numbers by product, because the A9000 is actually performing better than its sales target (and I didn't hear it from a reseller, I have seen the internal financial report :-))

  6. Anonymous Coward
    Anonymous Coward

    open source version of XIV

    Having used those, I wish someone would look at the technology and produce an open-source version of the grid-type array approach. It would greatly increase performance for home and SMB NAS devices.

    1. Anonymous Coward
      Anonymous Coward

      Re: open source version of XIV

      It is a take on the model that all of the cloud storage providers use. That is what Moshe was drawing on when he built XIV back in the day... the Google data storage model, but in an on prem box. You can just buy storage from Google Cloud now and get the concept in a managed model, like Google Cloud Nearline. AWS has a similar methodology with S3 and the like.

      1. Anonymous Coward
        Anonymous Coward

        Re: open source version of XIV

        Do you get the same sort of performance from Google Cloud though?

        The grid structure is aimed at two problems which affect RAID-based arrays: performance and rebuild times. With, say, a 7+P RAID 5 array, only 8 disks are able to serve I/O. Overall, those traditional systems will balance their I/O across all of the disks, as they should all be in play if the system is loaded as a whole, but for a particular workload you're limited to one array: 8 disks. With XIV you are guaranteed that everything is spread everywhere.

        Rebuild times are arguably more of an issue. If you have large disks (people are only buying large disks + flash nowadays, i.e. the XIV model - 15K is dead) then the time taken to rebuild a RAID 5 from 7 disks to a hot spare is horrendous, especially if the array is still serving a lot of I/O. RAID 6 offers an extra layer of protection, but like RAID 5 there's a write penalty, only it's worse as an extra read/write is required every time you write new data.

        Distributed RAID has helped to overcome the performance issue, but be aware that some implementations don't distribute everywhere like XIV does. It also still suffers from the write penalty, just as it always did.
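
        The write penalty is easy to put numbers on. A small random write to parity RAID reads the old data and old parity, then writes new data and new parity; RAID 6 adds a second parity read/write. A back-of-envelope sketch:

```python
# Back-of-envelope I/O amplification for small random writes to
# parity RAID: read old data + old parity blocks, then write new
# data + new parity blocks. Illustrative arithmetic only.

def write_penalty(parity_disks):
    reads = 1 + parity_disks    # old data + each old parity block
    writes = 1 + parity_disks   # new data + each new parity block
    return reads + writes

print(write_penalty(1))   # RAID 5: 4 back-end I/Os per host write
print(write_penalty(2))   # RAID 6: 6 back-end I/Os per host write
```

        Distributing the stripes across more disks spreads this load out, but doesn't remove it, which is the point above.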

        Of course, the downside with XIV is it's hopelessly inefficient in its use of capacity: a compute node for every twelve drives and basic mirroring. Add in spare capacity and you're already below 50% efficient. Even with cheap drives you're using a lot of space for not a lot of capacity.
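
        The sub-50% figure follows directly from mirroring plus spares. A rough sketch (the 10% spare reservation is my own illustrative assumption, not XIV's actual figure):

```python
# Rough usable-capacity fraction for a mirrored grid: every block
# is stored twice, and some raw capacity is held back as spare for
# rebuilds. The 10% spare figure is an illustrative assumption.

def usable_fraction(mirror_copies=2, spare_fraction=0.1):
    return (1 - spare_fraction) / mirror_copies

print(f"{usable_fraction():.0%}")   # ~45% usable, i.e. below 50% efficient
```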

        Moshe's latest, Infinidat, takes it a step further, using a form of distributed RAID which is more like the erasure coding you get in cloud storage. It's fixed at two parity blocks, so like RAID 6, but spread over all the disks in the system like XIV. They get over the penalty by writing a new block rather than updating the existing one.

        As for open source: object storage implementations like Ceph will do erasure coding, but they're not really suitable for enterprise block (Ceph will do block, but you'll need to throw flash at it to get it to perform, plus you need an army to set it up and maintain it, relatively speaking). I can't think of any block SDS implementations.

        1. Anonymous Coward
          Anonymous Coward

          Re: open source version of XIV

          Google's Borg takes what XIV does (wide striping across a ton of nodes/drives to get performance) and does it at hyper scale. I'm not sure what their performance numbers are... but their overall response time seems to be pretty awesome so I would assume their storage latency is crazy low, just judging based on the cycle response times from Google search, YouTube, e.g. returning 600 million results in 0.4 seconds round trip from a weak mobile connection.

  7. Anonymous Coward
    Anonymous Coward

    I used to sell XIV. It was really fun to tell the XIV story five years ago. The writing was on the wall even a few years ago that IBM would be moving away from disk-based arrays. I haven't looked at it recently, but it still seems unlikely that flash, especially specialised arrays like the FlashSystem, is going to be the same price as a 6 TB NL-SAS array... on a capacity basis. Even a few years back, you could get a 6 TB-drive XIV array at well under $1,000 per TB. I can see where FlashSystem would be the same price as XIV disk *per I/O*, but if you need 600 TB of relatively lightly pushed storage, it is difficult to believe that you can buy a 600 TB FlashSystem for the same price as a 600 TB XIV Gen3 array with 6 TB drives. Does anyone know? Can you buy an A9000R FlashSystem at, say, $700-800 per TB (at any capacity level)?

  8. Anonymous Coward
    Anonymous Coward

    Had IBM wanted to do an XIV Gen4, they would have done it by now

    XIV Gen1 (the start-up): 2005

    XIV Gen2: 2008

    XIV Gen3: 2011

    I doubt XIV customers are holding on in anxious anticipation for a new product after a 6 year hiatus.

    IBM clearly gave up on this market long ago.

    1. Anonymous Coward
      Anonymous Coward

      Re: Had IBM wanted to do an XIV Gen4, they would have done it by now

      XIV has become the Mac Pro of storage systems

    2. Anonymous Coward
      Anonymous Coward

      Re: Had IBM wanted to do an XIV Gen4, they would have done it by now

      In fairness, they did just keep adding drive sizes and features to the Gen3 without renaming it. Agree with the general point though. They seem to be done with the XIV as it is commonly understood.

      1. Anonymous Coward
        Anonymous Coward

        Re: Had IBM wanted to do an XIV Gen4, they would have done it by now

        You're correct to a point, but I think it comes down to IBM not being bothered to change the marketing collateral.

        Why say that? Imagine if you had a storage system you'd been selling for a few years, then you upgraded pretty much everything (including the software).

        Would you still call it VNX? No, you'd call it VNX2, but then you work for a company that understands (and does) marketing!

        With XIV Gen 3, here is an incomplete list of some of the changes that were made over the years after the 2011 announcement:

        - yes, larger drives were made available

        - self encrypting drives launched

        - faster CPUs used

        - additional memory (cache added) per module when using larger drives

        - larger SSDs for read cache per module

        - inline compression was added to the SW function (and added as standard for the latest "version" of Gen 3)

        - 2nd CPU (and extra memory) added per module to ensure compression didn't limit performance

        None of these things were apparently enough to warrant the name changing to Gen 4. This led to people calling the various flavours Gen 3, Gen 3.2 and Gen 3.4.

        I know of organisations that have replaced "Gen 3" with "Gen 3", but that makes it sound like there wasn't a difference - well the TB per floor tile and performance (IOPS up and latency down) had improved enormously - it was the next generation in all but name.

        Compare with Intel, who are great at marketing very tiny differences as if they make a world of difference. You don't have to explain to IT people that replacing 5 yr old Xeons with new Xeons is likely to lead to a significant performance boost/footprint reduction. "But they're still just Xeons". Quite......

        On the other point, A9000R is the all Flash XIV - but it's not a Gen 4.

        It's not just "XIV" from a marketing perspective to make it familiar to customers - it's "XIV" as it's running the "XIV" Spectrum Accelerate code. If anything, that makes it a happier transition for customers than VNX2 to Unity doesn't it?

        Or is the issue just "The Importance of Being Earnest / XIV?".

    3. Anonymous Coward
      Anonymous Coward

      Re: Had IBM wanted to do an XIV Gen4, they would have done it by now

      >I doubt XIV customers are holding on in anxious anticipation for a new product after a 6 year hiatus.

      I think a lot of them have either been convinced to move to other IBM stuff, been outbid by a competitor, or moved onto Infinidat, which is where XIV would probably have gone had IBM funded the development rather than pulling the plug.

      1. Anonymous Coward
        Anonymous Coward

        Re: Had IBM wanted to do an XIV Gen4, they would have done it by now

        I think there is a pretty fair amount of XIV Gen3 still out in the field. I know of at least one large operation with about 20 XIV frames (I sold it). I'm not sure IBM will be able to retain the install base over time.

        Agree that Infinidat is the next version of XIV, but it doesn't have the IBM logo so many XIV customers probably don't know it exists.

  9. Anonymous Coward
    Anonymous Coward

    Correction: Storwize the storage array was not Storwize the acquisition

    Storwize the company was all about data compression software for primary storage. When IBM acquired them, some IBM exec thought Storwize would be a great name for the new mid-range box in development, and re-purposed the name. I forget what IBM changed the compression software's name to, but it was kind of funny that the new Storwize storage boxes didn't come with the Storwize compression software.

    1. Anonymous Coward
      Anonymous Coward

      Re: Correction: Storwize the storage array was not Storwize the acquisition

      The first gen v7000 did come with compression eventually. It ran like a dog though, the v7000 gen 1 not having anywhere near the CPU grunt required.

      The Storwize product itself was killed off very quickly. IBM was never interested in the appliance, only in the compression code.

  10. IBMer

    Wow, so many biased comments

    An XIV guy here

    It's funny to see those comments of schadenfreude.

    Seems like XIV is still tough competition after all, eh?

    Sorry to let you down folks, but we are not going anywhere.

    In fact the investment IBM puts in this team is getting larger and larger every year.

    Pay a visit to our offices in Tel Aviv to understand what I'm talking about, most of you are not so far away anyhow ;-)

    There is lots of work on additional models of XIV which I can't obviously share here.

    After all, there is still no single product out there that delivers the capacity, performance and quality at this price point. BTW, when I say quality, I mean a real, field-proven 6 9's based on more than 10,000 systems out there, and not the mumbo jumbo calculations that others do in Excel.

    1. Anonymous Coward
      Anonymous Coward

      Re: Wow, so many biased comments

      "6 nines"... you keep using that word.

      I don't think it means quite what you think it means.

      1. Anonymous Coward
        Anonymous Coward

        Re: Wow, so many biased comments

        It means that 9999 customers had no unplanned outage at all yet one of them lost access to their data for 87 days.

        Real-world availability figures are useless unless you see the detail. I once worked for a vendor (won't say which one) and had the job of counting up real-world availability. In some months there were people with horrendous outages, some lasting for days. Yet once you spread it over the installed base worldwide, it averaged out to a nice cosy 5 or 6 nines.
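
        The averaging effect is easy to demonstrate. Borrowing the 10,000-system fleet and 87-day outage from the post above (illustrative arithmetic, not real vendor data):

```python
# Fleet-averaged availability can look excellent even when one
# customer suffered a catastrophic outage. Figures borrowed from
# the example above; purely illustrative.

minutes_per_year = 365 * 24 * 60
fleet = 10_000
outage_minutes = 87 * 24 * 60       # one customer down for 87 days

total_minutes = fleet * minutes_per_year
availability = (total_minutes - outage_minutes) / total_minutes
print(f"{availability:.6%}")        # roughly 99.9976% across the fleet
```

        So one customer can be down for nearly three months and the fleet-wide number still reads as better than "four nines", which is exactly why the per-installation detail matters.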

        If you're looking to buy something, take great care in how it's designed and built. If you're clever enough (and most people in this position are) you can work out whether the vendor's claims are ridiculous or are feasible/likely.

        In defence of XIV, I did work with it for a while and it's a solidly designed piece of kit. Or at least the Gen3 was. Gen2's ethernet switches were a bit crappy, as was the lack of SSD cache.

  11. Axel Koester

    Insider: Technical differences of XIV Gen3 vs. FlashSystem A9000

    Let me offer some technical background about the differences between XIV Gen3 and its successor FlashSystem A9000, and the reasons why we changed things. (Disclaimer: IBMer here.)

    First, both share the same development teams and major parts of their firmware. Notable differences include the A9000's shiny new GUI, designed for mobile.

    The major reason why there is no "XIV Gen4 with faster drives" is that you can order faster or larger drives for a Gen3, or even build your own flavor of XIV deploying a "Spectrum Accelerate" service on a VMware farm - or deploy a Cloud XIV as-a-service. They're all identical in look and feel.

    The original XIV data distribution schema was designed to squeeze the maximum performance out of large nearline disks leveraging SSD caches. For an All-Flash storage system, that doesn't make sense.

    Plus we noticed two roadblocks in x86 cluster storage design: First, standard x86 nodes full of dense SSDs are not up to the task of driving all that capacity (plus distributed RAID) at desirable latencies; NVMe fabrics might relieve some bottlenecks in the future.

    Second, even the best SSDs get depleted after heavy use, and we want to avoid having to deal with too many component failures at once. Also we preferred a design without opaque 3rd party SSD firmware mimicking disk drives, which would have serious limitations in lifetime / garbage collection control / health binning control etc.

    The A9000 therefore leverages FlashSystem's Variable Stripe RAID, which is implemented in low-latency gate logic hardware. Think of "Variable Stripe" as "self-healing" - a feature known from the XIV, but with RAID-5 efficiency. On top sits a data distribution schema that uses a 2:1 ratio of x86 nodes to flash drawers, or even 3:1 when it's just one pod (for lack of workload entanglement, among other things). The result is a system that runs global deduplication + real-time compression at latencies suitable for SAP databases and Oracle. That also implies that incompressible or encrypted data is not ideal - so it's not a system for ANY workload. But it's definitely not restricted to VDI, like some others. I'd encourage everyone to run simulations on data samples.

    1. Anonymous Coward
      Anonymous Coward

      Re: Insider: Technical differences of XIV Gen3 vs. FlashSystem A9000

      I think the issue though is going to be cost. I don't buy that you can purchase 1 PB, for instance, of A9000 for the same price as 1 PB of XIV Gen3 in the 6 TB drive model. It would be pretty crazy if IBM was selling TLC flash at $700 per TB. I believe that A9000 is the same price as XIV Gen3 per I/O, but that doesn't really matter, as most people are not going to be scratching the surface of the IOPS in A9000 but still need the capacity. That was IBM's old sales pitch: look at cost per I/O, not cost per capacity. That doesn't make sense though for the vast majority of workloads, where you are not IOPS-constrained. Relative cost per TB was one of the major selling points of XIV.
