NetApp gives its FAS range a 4 MILLION IOPS dose of spit'n'polish

As we foretold in May, NetApp has completed the revamp of its unified storage FAS arrays with FAS2500s at the low end and a monster FAS8080 EX at the top. We got the basic details right, except for the FAS8080, which has 36TB of Virtual Storage Tier flash rather than the 18TB we thought. Apart from that it has up to 5.76PB of capacity …

COMMENTS

This topic is closed for new posts.


  1. Anonymous Coward
    Anonymous Coward

    Software

    Sure - NetApp has improved the hardware, and it was needed. The software needs to catch up. They are still running traditional RAID (okay, it has double parity), but as drives get larger, rebuilds are going to drag on for over a day on these. Put a 6TB SATA drive in here, have it fail, and response times will be skewed for a while.
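
    To put rough numbers on the "over a day" worry above, here is a back-of-envelope sketch. The rebuild rates are illustrative assumptions, not NetApp figures.

```python
# Rough rebuild-time estimate for a large SATA drive in a parity RAID group.
# Both throughput figures are assumed values for illustration, not vendor specs.
drive_tb          = 6.0   # size of the failed drive
rebuild_mb_s_idle = 120   # sustained rebuild rate on an otherwise idle array (assumed)
rebuild_mb_s_busy = 30    # throttled rate while also serving production I/O (assumed)

for label, rate in [("idle array", rebuild_mb_s_idle), ("busy array", rebuild_mb_s_busy)]:
    hours = drive_tb * 1e6 / rate / 3600   # TB -> MB, then seconds -> hours
    print(f"{label}: ~{hours:.0f} hours to rebuild {drive_tb:.0f} TB at {rate} MB/s")
```

    At the throttled rate the rebuild stretches past two days, which is where the skewed response times come from.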

    1. TheVogon

      Re: Software

      And you would replace RAID-DP with...?

      1. M. B.

        Re: Software

        The performance of the RAID-DP implementation is literally one of my favorite things about NetApp.

        1. Anonymous Coward
          Anonymous Coward

          Re: Software

          Raid DP can perform very well, but compared to what?

          If you're sizing Raid DP for performance you typically have to factor in additional spindles vs other implementations. I'm fairly sure that if you oversized in the same manner for other kit you'd probably see similar results on Raid 6, at least for a decent implementation of Raid 6.

          But let's face it, Raid-DP is all Netapp have to offer (that's not the case for others) and so it has to be made to appear to fit every use case. Before anyone starts the "you don't understand WAFL" argument, yes I do, and pretty much every decent system on the market today offers large write caches, write coalescing and full-stripe writes; Netapp only really holds an advantage for dual-parity configurations on a new or underutilized system.

          This need to oversize for performance on RAID-DP is why Netapp have always been keen to get the other vendors to quote Raid 6 (it was never really about availability), as it levels the playing field somewhat. No problems with Netapp Raid-DP or any other dual-parity implementation, but let's not ascribe miracles to them, and at some point very soon even dual parity won't cut it.

          1. Anonymous Coward
            Anonymous Coward

            Re: Software

            Worth mentioning, and what many people overlook, is that Netapp absolutely needed Raid-DP because their old Raid4 implementation just couldn't provide safe scalability for their aggregate model. Other vendors didn't need this, as they either stuck with traditional raid sets, used them to back disk pools, or more recently employed micro raid to overcome the scalability issues. As such, Raid DP was never really a choice; it was always pretty much a mandatory requirement.

            1. SPGoetze

              Re: Software

              Hmmm, seeing that RAID-DP was introduced with ONTAP 6.5 a good 10 years ago and only became the default in ONTAP 7.3 (2009), I can't quite follow your reasoning.

              I DO follow the reasoning that being protected while rebuilding a disk (something RAID 4/5/10 doesn't provide), with only a <2% performance penalty (which you probably won't notice), is something you should do by default.

              Scalability, the way I understand it, was provided with the advent of 64-Bit Aggregates in ONTAP 8.x, especially 8.1+.

              1. Anonymous Coward
                Anonymous Coward

                Nice spin, but they needed DP to allow aggregates and flexvols to scale safely across more disks, not more capacity. With Raid4 and traditional spindle-based raid, every time you double the number of disks you also double the likelihood of a dual drive failure. So instead of revamping the Raid implementation they simply added another parity drive.
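
                A crude way to sanity-check that doubling claim, under assumed (illustrative) MTBF and rebuild-window numbers: with single parity you lose data if any second drive in the group fails during a rebuild, and that exposure grows roughly linearly with group size.

```python
# Approximate chance of a second drive failure during a single-parity rebuild.
# MTBF and rebuild window below are assumptions for illustration only.
mtbf_hours    = 1_000_000   # per-drive mean time between failures (assumed)
rebuild_hours = 24          # rebuild window for a large SATA drive (assumed)

def p_second_failure(drives_in_group: int) -> float:
    """Roughly: (surviving drives) x (rebuild window / per-drive MTBF)."""
    return (drives_in_group - 1) * rebuild_hours / mtbf_hours

for n in (8, 16, 32):
    print(f"{n:2d}-drive group: ~{p_second_failure(n):.5f} chance per rebuild")
```

                Doubling the drive count roughly doubles the exposure, which is the argument above put into numbers.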

          2. SPGoetze

            Re: Software

            Well, *Data ONTAP* offers only RAID-DP (and RAID 4, and RAID 0 (V-Series), and all of them mirrored, if you'd like). The 2% performance penalty (vs. buffered RAID-4) should be less than one additional disk...

            If you want performance with less Data Management Overhead, compare with the NetApp E-Series. It offers a variety of RAID schemes, plus 'Dynamic Disk Pools' (RAID-6 8+2 disk slices) which dramatically reduce rebuild time and impact.

            Or get a hybrid ONTAP config (FlashCache / FlashPool). I haven't seen a too-many-spindles-for-performance NetApp config in a long time...

            1. This post has been deleted by its author

            2. Anonymous Coward
              Anonymous Coward

              Re: Software

              Raid 0 - because with V-Series you rely on someone else to handle the backend raid calculations; if this were truly Raid 0 no one would touch it, so let's not pretend it's a choice.

              Raid 4 - single parity with a dedicated parity drive; oh my, would Netapp even recommend this after all those years banging on about dual parity? Even so it's not scalable for large aggregates, so it's pretty much never used these days - again, not really a choice.

              All of them mirrored - is a bit of misdirection; what you mean is replicated to another array, so let's not pretend this is something to do with Raid, it's another bit of software.

              Again, more relative marketing statements - a 2% performance hit vs what? Netapp's Raid 4 implementation, or other vendors' Raid 5 or Raid 1? And where is that hit being taken - backend disk, or at the CPU with additional parity calculations, or both?

              "If you want performance with less Data Management Overhead, compare with the NetApp E-Series"

              What? This article is all about FAS and ONTAP's new go-faster stripes and its ability to compete with All Flash Arrays. Yet you're telling us that a traditional dual-controller block storage box (E-Series/ex-LSI), which isn't even a particularly fast flash platform, still outperforms FAS?

      2. Anonymous Coward
        Anonymous Coward

        Re: Software

        Reed-Solomon codes -- http://www.cs.cmu.edu/~guyb/realworld/reedsolomon/reed_solomon_codes.html
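
        To make the erasure-coding idea concrete, here is the simplest possible special case - single XOR parity, RAID-4/5 style. Reed-Solomon generalises the same recover-from-survivors idea with Galois-field arithmetic so that any m of n+m blocks can be lost.

```python
# Minimal erasure-coding illustration: one XOR parity block per stripe.
# (Reed-Solomon replaces XOR with Galois-field math to survive multiple losses.)

def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

data = [b"NETAPP__", b"RAID-DP_", b"EXAMPLE_"]   # one stripe of data blocks
parity = xor_blocks(data)                        # the parity block

lost = 1                                         # pretend drive 1 died
survivors = [blk for i, blk in enumerate(data) if i != lost]
rebuilt = xor_blocks(survivors + [parity])       # XOR of survivors + parity

assert rebuilt == data[lost]
print("rebuilt block:", rebuilt)
```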

    2. LarryF

      Re: Software

      Agree with TheVogon - what would you replace RAID-DP with? Data Dispersal and Erasure Coding have their problems too. Nice thing about NetApp is that as soon as a drive hiccups, its data is automatically copied to a spare drive - avoiding any RAID rebuild time at all. Most drives die slowly, not suddenly. If your software is smart enough to detect this, you can use it to your advantage.

      Larry@NetApp
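
      A minimal sketch of the proactive "copy the sick drive before it dies" idea described above, assuming a simple error-count threshold; the Drive class and the threshold are hypothetical, not ONTAP internals.

```python
# Watch per-drive error counters; past a threshold, stream the still-readable
# drive to a spare instead of waiting for a hard failure and a parity rebuild.
from dataclasses import dataclass, field

ERROR_THRESHOLD = 10   # assumed number of media errors tolerated before acting

@dataclass
class Drive:
    name: str
    blocks: list = field(default_factory=list)
    media_errors: int = 0

def check_and_protect(drive: Drive, spare: Drive) -> str:
    """Decide what to do with a drive based on its error count."""
    if drive.media_errors < ERROR_THRESHOLD:
        return f"{drive.name}: healthy, keep serving I/O"
    # Failing slowly but still readable: do a straight sequential copy to the spare.
    spare.blocks = list(drive.blocks)
    return f"{drive.name}: pre-emptively copied to {spare.name}, no parity rebuild needed"

sick = Drive("disk.3", blocks=[b"data"] * 4, media_errors=12)
spare = Drive("spare.0")
print(check_and_protect(sick, spare))
```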

      1. Anonymous Coward
        Anonymous Coward

        Re: Software

        No need for erasure coding just yet, but they need to move away from the clunky old spindle-based raid; even if it's dual parity, it's still a decade or more out of date and completely inflexible for today's workloads.

        Some form of micro raid that doesn't tie aggregates or spares to specific physical disks and can track and rebuild only used blocks - like 3PAR, XIV, Compellent and, to some extent, EVA years before. That way data mobility becomes simple, and if you do it right you can reduce rebuild times and also increase availability (at least statistically) over traditional raid.

        This stuff is under the covers and so not typically considered as part of product selection criteria, but having data structures pinned to physical disks constrains the flexibility of pretty much every higher-level feature the vendors then layer on top.
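
        A back-of-envelope comparison of the two approaches described above - rebuilding a whole drive to one dedicated spare versus rebuilding only the used blocks across many peers with distributed spare space. Every number is an assumption for illustration, not any vendor's spec.

```python
# Traditional spindle rebuild vs distributed, used-blocks-only rebuild.
drive_tb       = 6.0    # raw size of the failed drive (assumed)
used_fraction  = 0.5    # share of the drive actually holding data (assumed)
spare_mb_s     = 120    # write rate of a single dedicated spare (assumed)
peers          = 40     # surviving drives sharing a distributed rebuild (assumed)
per_peer_mb_s  = 30     # rebuild bandwidth each peer gives up without hurting hosts (assumed)

traditional_h = drive_tb * 1e6 / spare_mb_s / 3600
distributed_h = drive_tb * used_fraction * 1e6 / (peers * per_peer_mb_s) / 3600

print(f"whole drive to one spare     : ~{traditional_h:.1f} h")
print(f"used blocks spread over peers: ~{distributed_h:.1f} h")
```

        Rebuilding only allocated blocks, many-to-many, is where the shorter rebuild times and the statistical availability gain come from.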

      2. Anonymous Coward
        Anonymous Coward

        Re: Software

        Or do what Nimble Storage does... smarter rebuilds.

        Or do what Panasas does... file level RAID.

        Or do what HDFS does

        Or do what GFS does

        Larry@NetApp -- come on. Don't sell us old ideas in a new way.

    3. This post has been deleted by its author

  2. Anonymous Coward
    Anonymous Coward

    Why would anyone go with cash burning startup

    It seems that from a performance or feature standpoint, there doesn't seem to be that much reason to go with a startup vs an incumbent vendor. It looks like, price-wise, they are all converging as well.

    Comments?

    1. TheVogon

      Re: Why would anyone go with cash burning startup

      "There doesnt seem to be that much reason to go with a startup vs an incumbent vendor"

      Proven reliability and feature set spring to mind. If money was no object and I needed enterprise class storage then I would almost always buy EMC V-Max because it is hands down the most capable, powerful and flexible product. There are niche capabilities that you sometimes need other products for but V-Max does the most in one package imo. However in the real world you need to consider what you are getting for your money too, and then other vendors come into the picture.

    2. Anonymous Coward
      Anonymous Coward

      Re: Why would anyone go with cash burning startup

      Well, it depends how many systems and how much it costs to get those 4 million IOPS. Also, are they real numbers or marketing numbers derived from a 100% cache hit rate?

    3. Anonymous Coward
      Anonymous Coward

      Re: Why would anyone go with cash burning startup

      Depends on the incumbent vendor and its capabilities. Consistent low latency, very high IOPS and data compaction technologies seem to be the startups' real selling points, but they lack many enterprise features which only time can hone.

      Conversely, most of the traditional incumbent vendors are hampered by non-flash-optimized architectures which, although they may be able to achieve monster IOPS, have trouble guaranteeing consistent latency and the levels of capacity efficiency that come from inbuilt compaction.

      As for the 4 million IOPS touted here, that will be a very specific and very expensive configuration unlikely to be used anywhere outside of a test lab, and the IOPS number alone tells us nothing about the test criteria or kit required. They may as well have said 5 or 6 million, because no one can really question those numbers and they aren't really required to provide proof.

      1. SPGoetze

        Re: 4 Mio IOPS probably not too far off the mark...

        I'm fairly sure they arrived at the 4 Mio IOPS number by taking the old 24-node 6240 SpecNFS results (https://www.spec.org/sfs2008/results/res2011q4/sfs2008-20111003-00198.html) and factoring in the increase in controller performance.

        The old result had 3 shelves of SAS disks per controller, so there's no unrealistically expensive RAID 10 SSD config required, like with other vendors' results. Also, 23 of 24 accesses were 'indirect', meaning that they had to go through the 'cluster interconnect'. pNFS would have improved the results quite a bit, I'm sure.

        The old series of benchmarks (4-node, 8-node, .. 20-node, 24-node) also showed linear scaling, so unless you'd saturate the cluster interconnect - which you can calculate easily - the 4 Mio IOPS number should be realistic for a fairly small (per controller) configuration. Real-life configs (e.g. SuperMUC https://www.lrz.de/services/compute/supermuc/systemdescription/ ) will probably always use fewer nodes and more spindles per node.
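
        The "can the cluster interconnect keep up?" check alluded to above can be roughed out like this; every number below (average transfer size, links per node, link speed) is an illustrative assumption, not a NetApp spec.

```python
# Rough interconnect saturation check for a 24-node scale-out NFS cluster.
nodes             = 24
iops_total        = 4_000_000
avg_transfer_kb   = 8          # assumed average data moved per operation
indirect_fraction = 23 / 24    # share of requests served via a remote node (as in the old result)
links_per_node    = 2          # assumed cluster-interconnect ports per node
link_gbit         = 10         # assumed 10GbE links

cross_traffic_gbit = iops_total * indirect_fraction * avg_transfer_kb * 8 / 1e6
fabric_gbit        = nodes * links_per_node * link_gbit

print(f"interconnect traffic : {cross_traffic_gbit:6.0f} Gbit/s")
print(f"interconnect capacity: {fabric_gbit:6.0f} Gbit/s")
print("saturated" if cross_traffic_gbit > fabric_gbit else "headroom remains")
```

        With these assumptions the fabric has headroom, which is consistent with the linear scaling in the published results; change the assumed transfer size and the conclusion can flip.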

        1. Anonymous Coward
          Anonymous Coward

          specNFS doesn't measure IOps, so the number is meaningless.

        2. Anonymous Coward
          Anonymous Coward

          Re: 4 Mio IOPS probably not too far off the mark...

          So what you're saying is that you think they made the number up?

  3. This post has been deleted by its author

  4. Anonymous Coward
    Anonymous Coward

    These figures may be impressive, but NetApp has become so ridiculously complex (especially cDOT, with its mind-blowingly restrictive compatibility matrix) that there are simpler and more cost-effective solutions available.

    NetApp is just no longer unique and in many cases prohibitively expensive.

  5. Anonymous Coward
    Anonymous Coward

    No mention of IBM in any of this?

    Another El-Reg article

    http://www.theregister.co.uk/2014/06/13/gartners_allflash_array_report_has_ibm_as_number_one/

    says IBM are the leading flash vendor. Yet no mention in the article or the comments. Are XIV and V7000 seen as last generation?

  6. HCL

    Head HPC Business Development, HCL, India

    Seems to be a very promising product. Particularly in the HPC arena: if they do away with the need for storage nodes by interfacing this directly with the compute cluster over multiple IB ports, this product will be a killer.

    1. Meanbean

      Re: Head HPC Business Development, HCL, India

      For HPC and ultra-low-latency, high-performance work, check out E-Series, which has IB.

  7. cBells

    How much would it cost to fill an 8000 series with all-SSD flash drives? Sounds awesome, yet outrageously expensive... if only they could offer inline deduplication to reduce the number of SSDs required and lower the cost per GB. Where do XIV and XtremIO sit compared to this offering?

    1. LarryF

      Post processing dedupe has its advantages

      Post-process dedupe has a lot going for it, number one being that it doesn't get in the way of a CPU that's busy with server I/O requests. Number two - since you have more time to process hashes, you can add data integrity checks to avoid hash collisions altogether. And... if you schedule a dedupe run each night, the capacity overhead required is surprisingly small.

      Larry@NetApp
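
      A minimal sketch of the fingerprint-then-verify idea described above: hash each block, and on a fingerprint match do a full byte compare before sharing the block, so a hash collision can never silently merge different data. Block size, hash choice and the in-memory "store" are illustrative, not NetApp's implementation.

```python
# Dedupe with verification: duplicate blocks are detected by fingerprint and
# confirmed byte-for-byte before being shared.
import hashlib

BLOCK = 4096
store = {}   # fingerprint -> block bytes (stand-in for the physical block store)

def write_block(data: bytes) -> str:
    """Store a block, sharing it with an identical existing block if possible."""
    fp = hashlib.sha256(data).hexdigest()
    if fp in store:
        if store[fp] == data:   # fingerprint match confirmed byte-for-byte
            return fp           # duplicate -> reference the existing block
        fp += ":1"              # true collision (astronomically rare): keep both (simplistic)
    store[fp] = data
    return fp

blocks = [b"A" * BLOCK, b"B" * BLOCK, b"A" * BLOCK]   # third block is a duplicate
refs = [write_block(b) for b in blocks]
print(f"logical blocks written: {len(refs)}, physical blocks stored: {len(store)}")
```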

      1. Anonymous Coward
        Anonymous Coward

        Re: Post processing dedupe has its advantages

        More mumbo jumbo to make Netapp's post process dedupe technology seem relevant.

        The all-flash arrays use inline dedupe today and, despite the above claims, still manage to outperform an equivalent FAS configuration. The data integrity comment is also a ruse, as they all either perform a byte compare on match to avoid collisions altogether or use multilayer hashing to check and correct data integrity issues after ingest; some do both, and all at much faster speeds and lower latencies than Netapp FAS can provide.

        With post-process, if you have 50TB of data to ingest you need 50+TB to land it in (including metadata), so what did you save upfront? Also, if you ever need to recover your data into a post-process dedupe system, you're likely going to have to stagger that recovery between dedupe runs to ensure you have enough space available to accommodate the data.

        Dedupe is good to have in any system, but post-process is the least preferable in the world of flash - just ask Netapp when they eventually launch FlashRay :-)
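
        The landing-space point above as plain arithmetic; the 3:1 reduction ratio is an assumed illustration.

```python
# How much free space must exist before the data can land, post-process vs inline.
ingest_tb    = 50     # data to ingest, as in the example above
dedupe_ratio = 3.0    # assumed reduction ratio once dedupe has run

post_process_landing_tb = ingest_tb                   # everything lands raw, dedupe later
inline_landing_tb       = ingest_tb / dedupe_ratio    # reduced as it is written

print(f"post-process: ~{post_process_landing_tb:.0f} TB free needed before the dedupe run")
print(f"inline      : ~{inline_landing_tb:.1f} TB free needed during ingest")
```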

      2. AndrewDH

        Re: Post processing dedupe has its advantages

        If you can't handle inline de-duplication without impacting performance then clearly, as you have stated, post-process de-dupe would be preferable.

        However, if you can handle inline de-dupe without impacting performance (and a number of vendors, but not NetApp, can) then post-process de-dupe is a poor idea.

        Of course, if you are de-duping anything on spinning-disk-based storage then you are also potentially randomizing the data that your DBMS etc. has just tried to write out sequentially, making your performance much worse. Not a problem with flash, but more of an issue with hybrid flash or disk-based devices.

      3. Anonymous Coward
        Anonymous Coward

        Re: Post processing dedupe has its advantages

        Good lord, man! Are you still in the 2000s? Flash enables inline dedupe for so many companies. Just because NetApp's engineering can't change the code to accommodate it doesn't mean you get to use the CPU as an excuse. Sometimes it is best to just dump the old code and avoid the taxes that come with it. EMC did that with Clariion/VNX and is now doing it again with XtremeIO. HP did that with 3PAR (vs. EVA). How about some ONTAP-for-the-2010s action?

    2. Anonymous Coward
      Anonymous Coward

      XIV is in the same boat as Netapp in that they're not really flash-optimized solutions, but both can quite happily take advantage of flash technology.

      In XIV's case it has limited scale, and hence performance, and supports only a read-only cache today; its claim to fame is simplified management, which seems to spring from the fact you have extremely limited choice in how things are configured.

      Netapp has much higher scalability, flexibility and many more features, but its active/passive architecture is getting very long in the tooth and cluster mode only really masks the problem rather than solving it.

      XtremeIO is a wait-and-see technology for me: it has very limited features, no online upgrades, rush-to-market features such as UPS backup for DRAM cache, etc., and at present it looks like some brave souls are paying EMC for the privilege of beta testing the product.

      The more mature players such as Pure, SolidFire etc. seem to be in better overall shape in terms of flash optimization and product maturity, but they again lack enterprise features. If I needed AFA performance but wanted enterprise features then IMHO the only vendor who can really supply that without middleware in the form of additional software or appliances is HP with their 3PAR 7450 array.

    3. StorageEngineer

      At least as far as I know, with this upgrade NetApp is offering hybrid flash arrays at the same price as before the upgrade. The same should be true for all-SSD, and note that NetApp did sell all-flash before. It seems they have only now woken up to optimizing it and selling it into the market.

      Yes, you are right that inline deduplication is the missing piece with NetApp FAS arrays. But they do offer offline dedupe, which is better from a performance perspective (compared to inline) but delayed. They had better come up with inline dedupe.

      Otherwise, from a data management feature perspective, they are king. EMC is in the same boat with their high-end arrays. If these two companies come up with decent IOPS (say, top 10) on their traditional systems with all flash (not XtremIO/EF-Series), it will be unmatched, other than on price. Pure is desperately trying to match the data management capabilities, but it won't happen overnight. It takes time to evolve.

      1. Anonymous Coward
        Anonymous Coward

        "But they do offer offline dedupe which is better from performance perspective (compared to inline) "

        No, the all-flash boys doing inline today completely outperform FAS and VNX2 with or without their post-process dedupe, so you can't claim that it impacts performance just because they can't do it.

        " these two companies come up with decent IOPs (say top 10) on their traditional systems with all flash (not xtremeIO/EF series), it will be unmatched other than price."

        But that's the problem: their architectures are underpinned by a legacy software stack, which means they can't just throw flash at the problem. This is made worse by the fact they're essentially active/passive solutions with no ability to scale out, and trying to retrofit scale-out to an existing architecture whilst maintaining low latency is a fool's errand. This is precisely why EMC had to buy XtremeIO and Netapp are developing FlashRay!

        Netapp are taking a leaf out of EMC's playbook and hyping the fact they can now scale across cores. It doesn't solve the problem in any way, but it buys them some time until they have something more competitive. As for Pure et al, I'm not really sure what remains of many of the startups' value props now that HP have introduced inline dedupe to their 3PAR all-flash array; it does all of the above and then some, and also has killer data services.

  8. MRensh3

    AFA is a loose term nowadays...

    Legacy SANs filled with SSDs don't make an AFA, as EMC understood when they tried selling their first "AFA" with the VNX-F (a VNX with just SSDs). They at least smartened up and bought XtremIO, and are now destroying the likes of Pure and other "AFA" startups with an architecture truly built to support all-SSD. When is NTAP going to come out with a real AFA?

    1. Anonymous Coward
      Anonymous Coward

      Re: AFA is a loose term nowadays...

      Tsk, tsk. EMC are destroying no one with what amounts to a public beta of XtremeIO. The couple I've seen in the wild were only really taken on board because they were effectively sweeteners for a bigger deal.

      1. MRensh3

        Re: AFA is a loose term nowadays...

        A public beta? They bought XtremIO 2 years ago and have been pouring R&D into it ever since. After the first month of GA it became the #1 AFA in terms of revenue. Sure, they are still waiting to integrate compression and replication into the OS, but they'll be there soon. Even without that, the architecture they've built this product on (scale-out vs scale-up) makes this an easy choice for any company looking to the future vs NTAP, XIV, Pure, etc.

        1. Anonymous Coward
          Anonymous Coward

          Re: AFA is a loose term nowadays...

          Just repeating the marketing hype won't make it so; the revenue numbers at this point are meaningless, as the market wasn't fully defined when EMC made these statements.

          http://www.thegurleyman.com/field-review-xtremio-gen2/

          Public beta?

          "XtremIO sharing everything can mean more than just the good stuff. In April, ours “shared” a panic over the InfiniBand connection... but it was production-down for us"

          Scale out?

          "True to Justin’s review, XtremIO practically scales up. Anything else is disruptive"

    2. Nick Triantos

      Mental Investment Required

      Don't compare the SW architecture of a VNX to ONTAP, because if there were a storage police, you'd be in jail by now.

      However, everybody needs to make a mental investment beyond the 140 Twitter chars and the social media noise, and I'm willing to help with that. Here's a good starting point for understanding a few things about "real" AFAs vs ONTAP, plus a few other things I'm quite sure you had no idea about.

      http://storageviews.com/

      Disclosure: NetApp Employee

      1. Anonymous Coward
        Anonymous Coward

        Re: Mental Investment Required

        Hehe, that's just like the VNX2 launch.

        Yay, finally after a decade we've managed to optimize for multicore CPUs, even though we've been telling you for years that we go twice as fast with each product release.

        This marketing mumbo jumbo is nothing more than an effort to stay relevant in an industry moving to flash, and for all Netapp's great software, their FAS architecture can no longer cut it. If it could, they wouldn't push E-Series for performance, nor would they be developing FlashRay.

      2. mtuber

        Re: Mental Investment Required

        The internet produces fools and trolls by the minute, faster than anyone's ability to educate them.

        It was a good post, man, and those of us who care found it very enlightening.

        1. Anonymous Coward
          Anonymous Coward

          Re: Mental Investment Required

          No, it was an effort to make it appear Netapp had provided some unique innovation to optimize these products for flash, whereas the reality is they're using the same multicore spin EMC did with the VNX2 release to make their products appear relevant in this market until they have something better to offer.

          1. mtuber

            Re: Mental Investment Required

            You intentionally choose to ignore how ONTAP writes, which is fundamentally no different from how AFAs do it. So while they made the changes for multicore, the underlying fundamentals are the same as an AFA, which is NOT how the VNX does it. So what he is really saying is "if I write the same as an AFA, and I have multicore support, why would you buy an AFA?"

            I guess HDS and HP are also "stupid" for modifying their code...

            1. Anonymous Coward
              Anonymous Coward

              Re: Mental Investment Required

              Dude... Yes, I did intentionally ignore that, because it's marketing BS. So WAFL's write layout and the new multicore scaling combined now qualify the FAS platform as an AFA system? Meh. If there were really more to that statement than marketing to assist with the relaunch of more of the same old, then why on earth would Netapp need to go off and build FlashRay from scratch? Besides, I'm not comparing it to VNX, which for all intents and purposes has hit the same dead end as FAS; that's why EMC are plugging XtremeIO for all they're worth.

              A few years ago Netapp marketing were harping on about how WAFL's write layout was also ideal for flash wear levelling and how they could handle the process much better than rivals. They completely ignored the fact that this functionality was built into the drive's own firmware and so completely transparent anyway. It didn't matter where WAFL put the write, because the drive would relocate it anyway :-) Same for post-process dedupe: hey, it's great, except all that shuffling of blocks after the fact causes write amplification (wonder how they handle the reallocate on SSD) and additional wear to the drives vs inline, which does neither and which Netapp don't have.

              More marketeering, but if you choose to be duped, so be it.

            2. Anonymous Coward
              Anonymous Coward

              Re: Mental Investment Required

              HP & HDS have actually done some clever things around flash integration, but their architectures are vastly more integrated than either VNX or FAS, so as a whole their systems have much more scope for the future: close-coupled scale-out architectures, multi-controller and multi-processor scaling for many years, symmetric active-active processing on controllers, etc., as well as all the other flash enhancements. In the case of HP they even have a dedicated product with specific internal tweaks for all-flash, which also now supports inline dedupe as well as all the other inline compaction tech they already had. These very real embedded flash enhancements now allow them to offer an unconditional 5-year warranty even on cMLC drives.

              1. mtuber

                Re: Mental Investment Required

                Like what? Name them, please. What are the "clever" things they have done? Netapp also offers unconditional 5-year support, and if Netapp were to offer cMLC drive support, would that change your stance? Be prepared to put your foot in your mouth with regard to cMLC support on FAS...

                1. Anonymous Coward
                  Anonymous Coward

                  So Netapp offer a 5-year unlimited warranty for eMLC & cMLC regardless of whether the drive fails or exceeds its overwrite limit? Then show me that unconditional statement.

  9. Anonymous Coward
    Anonymous Coward

    BTW, I never claimed Netapp couldn't use cMLC; that appears to be your own paranoia kicking in. Since you asked for some examples, read this: http://www.techopsguys.com/tag/3par/

    BTW those are just a few of the more overt features; there are also lots that just don't get called out because they're simply part of the architecture, e.g. Symm AA, multicore and multinode scaling of workloads, wide striping, distributed virtual spares, ASIC-based thinning, etc. I can provide you with HDS info as well if you'd like.

    1. mtuber

      I think you have no idea what you're claiming because you keep changing the conversation....dude

      And btw... how's ASIC use and wide striping different with SSDs than with HDDs? It's not. It's the same stuff with 3PAR (different packaging in HDS's case), but under the covers there have NOT been any meaningful changes.
