back to article NetApp gives its FAS range a 4 MILLION IOPS dose of spit'n'polish

As we foretold in May, NetApp has completed the revamp of its unified storage FAS arrays, with FAS2500s at the low end and a monster FAS8080 EX at the top. We got the basic details right, except for the FAS8080, which has 36TB of (Virtual Storage Tier) flash and not the 18TB we thought. Apart from that it has up to 5.76PB of capacity …

COMMENTS

This topic is closed for new posts.
  1. Anonymous Coward
    Anonymous Coward

    Software

    Sure - NetApp has improved the hardware, and it was needed. The software needs to catch up. They are still running traditional RAID (okay, it has double parity), but as drives get larger, rebuilds are going to drag on for over a day on these. Put a 6TB SATA drive in one of these, have it fail, and response times will be skewed for a while.
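    The "over a day" figure checks out as back-of-envelope arithmetic. The sustained rebuild rate below is an assumed, illustrative number; real rates vary with controller load and are often lower:

```python
# Rough rebuild-time estimate; the rebuild rate is an assumption
# for illustration, not a measured NetApp figure.
drive_bytes = 6e12       # 6TB SATA drive
rebuild_rate = 70e6      # assumed ~70 MB/s sustained rebuild rate
hours = drive_bytes / rebuild_rate / 3600
print(round(hours, 1))   # on the order of a full day
```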

    1. TheVogon

      Re: Software

      And what would you replace RAID-DP with?

      1. M. B.

        Re: Software

        The performance of the RAID-DP implementation is literally one of my favorite things about NetApp.

        1. Anonymous Coward
          Anonymous Coward

          Re: Software

          RAID-DP can perform very well, but compared to what?

          If you're sizing RAID-DP for performance you typically have to factor in additional spindles vs other implementations. I'm fairly sure that if you oversized in the same manner on other kit you'd probably see similar results from RAID 6, at least from a decent implementation of RAID 6.

          But let's face it: RAID-DP is all NetApp have to offer (that's not the case for others), and so it has to be made to appear to fit every use case. Before anyone starts the "you don't understand WAFL" argument: yes I do, and pretty much every decent system on the market today offers large write caches, write coalescing and full-stripe writes. NetApp only really holds an advantage for dual-parity configurations on a new or underutilized system.

          This need to oversize for performance on RAID-DP is why NetApp have always been keen to get the other vendors to quote RAID 6 (it was never really about availability), as it levels the playing field somewhat. No problem with NetApp RAID-DP or any other dual-parity implementation, but let's not ascribe miracles to them, and at some point very soon even dual parity won't cut it.

          1. Anonymous Coward
            Anonymous Coward

            Re: Software

            Worth mentioning, and what many people overlook, is that NetApp absolutely needed RAID-DP because their old RAID 4 implementation just couldn't provide safe scalability for their aggregate model. Other vendors didn't need this, as they either stuck with traditional RAID sets, used these to back disk pools, or more recently employed micro-RAID to overcome the scalability issues. As such, RAID-DP was never really a choice; it was always pretty much a mandatory requirement.

            1. SPGoetze

              Re: Software

              Hmmm, seeing that RAID-DP was introduced with ONTAP 6.5 a good 10 years ago and only became the default in ONTAP 7.3 (2009), I can't quite follow your reasoning.

              I DO follow the reasoning that being protected while rebuilding a disk (something that RAID 4/5/10 don't provide), at only a <2% performance penalty (which you probably won't notice), is something you should do by default.

              Scalability, the way I understand it, was provided with the advent of 64-Bit Aggregates in ONTAP 8.x, especially 8.1+.

              1. Anonymous Coward
                Anonymous Coward

                Nice spin, but they needed DP to allow aggregates and flexvols to scale safely across more disks, not more capacity. With RAID 4 and traditional spindle-based RAID, every time you double the number of disks you also double the likelihood of a dual-drive failure. So instead of revamping the RAID implementation they simply added another parity drive.
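                The doubling claim is easy to sanity-check with a toy model: once one drive has failed, any of the remaining drives in the group can fail during the rebuild window. The per-drive failure probability below is made up purely for illustration:

```python
# Toy model, not NetApp's actual math: probability that a second
# drive in the same RAID group fails while the first rebuilds.
# p_window is an assumed per-drive failure probability during the
# rebuild window; the value is made up for illustration.
def second_failure_prob(n_drives, p_window):
    # after one failure, any of the remaining n-1 drives can fail
    # before the rebuild completes
    return 1.0 - (1.0 - p_window) ** (n_drives - 1)

for n in (8, 16, 32):
    print(n, round(second_failure_prob(n, 0.001), 5))
```

                For small probabilities the result roughly doubles each time the group size doubles, which is exactly the scaling pressure the extra parity drive papers over.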

          2. SPGoetze

            Re: Software

            Well, *Data ONTAP* offers only RAID-DP (and RAID 4, and RAID 0 (V-Series), and all of them mirrored, if you'd like). The 2% performance penalty (vs. buffered RAID-4) should be less than one additional disk...

            If you want performance with less Data Management Overhead, compare with the NetApp E-Series. It offers a variety of RAID schemes, plus 'Dynamic Disk Pools' (RAID-6 8+2 disk slices) which dramatically reduce rebuild time and impact.

            Or get a hybrid ONTAP config (Flash Cache / Flash Pool). I haven't seen a too-many-spindles-for-performance NetApp config in a long time...

            1. This post has been deleted by its author

            2. Anonymous Coward
              Anonymous Coward

              Re: Software

              RAID 0 - because with V-Series you rely on someone else to handle the backend RAID calculations; if this were truly RAID 0 no one would touch it, so let's not pretend it's a choice.

              RAID 4 - single parity with a dedicated parity drive; would NetApp even recommend this after all those years banging on about dual parity? Even so, it's not scalable for large aggregates, so it's pretty much never used these days; again, not really a choice.

              All of them mirrored - a bit of misdirection; what you mean is replicated to another array, so let's not pretend this is something to do with RAID, it's another bit of software.

              Again, more relative marketing statements - a 2% performance hit vs what? NetApp's RAID 4 implementation, or other vendors' RAID 5 or RAID 1? And where is that hit being taken: on backend disk, at the CPU with additional parity calculations, or both?

              "If you want performance with less Data Management Overhead, compare with the NetApp E-Series"

              What? This article is all about FAS and ONTAP's new go-faster stripes and their ability to compete with all-flash arrays. Yet you're telling us that a traditional dual-controller block storage box (E-Series/ex-LSI), which isn't even a particularly fast flash platform, still outperforms FAS?

      2. Anonymous Coward
        Anonymous Coward

        Re: Software

        Reed Solomon codes -- http://www.cs.cmu.edu/~guyb/realworld/reedsolomon/reed_solomon_codes.html
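        The erasure-coding idea behind Reed-Solomon can be shown in toy form: treat k data symbols as the coefficients of a polynomial, store its value at k+m points, and recover the data from any k surviving points. This sketch uses a small prime field purely for readability; real RS implementations work over GF(2^8):

```python
# Toy erasure coding in the spirit of Reed-Solomon, over a small
# prime field for readability (real RS codes use GF(2^8)).
P = 257  # prime modulus, big enough for byte-sized toy symbols

def encode(data, m):
    """Store k data symbols as k+m polynomial evaluations."""
    k = len(data)
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(data)) % P)
            for x in range(1, k + m + 1)]

def decode(shares, k):
    """Recover the k data symbols from any k surviving shares
    via Lagrange interpolation."""
    shares = shares[:k]
    coeffs = [0] * k
    for i, (xi, yi) in enumerate(shares):
        basis = [1]  # coefficients of the Lagrange basis polynomial
        denom = 1
        for j, (xj, _) in enumerate(shares):
            if i == j:
                continue
            # multiply basis polynomial by (x - xj)
            basis = [(hi - xj * lo) % P
                     for lo, hi in zip(basis + [0], [0] + basis)]
            denom = denom * (xi - xj) % P
        scale = yi * pow(denom, P - 2, P) % P  # yi / denom mod P
        for d in range(k):
            coeffs[d] = (coeffs[d] + scale * basis[d]) % P
    return coeffs
```

        With 2 data symbols and 2 parity shares, any two of the four shares reconstruct the data, i.e. any two device losses are survivable, in the same spirit as a dual-parity scheme's tolerance of two drive failures.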

    2. LarryF

      Re: Software

      Agree with TheVogon - what would you replace RAID-DP with? Data Dispersal and Erasure Coding have their problems too. Nice thing about NetApp is that as soon as a drive hiccups, its data is automatically copied to a spare drive - avoiding any RAID rebuild time at all. Most drives die slowly, not suddenly. If your software is smart enough to detect this, you can use it to your advantage.

      Larry@NetApp

      1. Anonymous Coward
        Anonymous Coward

        Re: Software

        No need for erasure coding just yet, but they need to move away from the clunky old spindle-based RAID; even with dual parity it's still a decade or more out of date and completely inflexible for today's workloads.

        Some form of micro-RAID that doesn't tie aggregates or spares to specific physical disks and can track and rebuild only used blocks - like 3PAR, XIV, Compellent, and even EVA to some extent years before. That way data mobility becomes simple, and if you do it right you can reduce rebuild times and also increase availability (at least statistically) over traditional RAID.

        This stuff is under the covers and so not typically considered as part of a product selection criteria, but having data structures pinned to physical disk constrains the flexibility of pretty much every higher level feature the vendors then layer on top.
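        The micro-RAID idea is easy to sketch: carve each disk into small chunklets, place data on whichever disks have the most free space, and on failure rebuild only the chunklets actually in use, fanning the writes across every surviving disk. A toy model, with names and numbers that are illustrative rather than any vendor's implementation:

```python
from collections import defaultdict

# Toy "micro raid" pool: disks carved into chunklets, data placed on
# the least-loaded disks, rebuilds limited to chunklets in use.
class ChunkletPool:
    def __init__(self, n_disks, chunklets_per_disk):
        self.free = {d: chunklets_per_disk for d in range(n_disks)}
        self.used = defaultdict(int)  # disk -> chunklets holding data

    def allocate(self, n):
        # place each chunklet on whichever disk has the most free
        # space, which naturally wide-stripes data across the pool
        for _ in range(n):
            d = max(self.free, key=self.free.get)
            self.free[d] -= 1
            self.used[d] += 1

    def fail(self, disk):
        # rebuild only the *used* chunklets of the failed disk, with
        # the rebuilt copies fanned out across every surviving disk
        to_rebuild = self.used.pop(disk, 0)
        del self.free[disk]
        for _ in range(to_rebuild):
            d = max(self.free, key=self.free.get)
            self.free[d] -= 1
            self.used[d] += 1
        return to_rebuild

pool = ChunkletPool(8, 1000)
pool.allocate(800)    # only a fraction of the pool holds data
print(pool.fail(0))   # chunklets rebuilt, not a whole disk's worth
```

        With 8 disks of 1,000 chunklets each and 800 chunklets of data, losing a disk means rebuilding roughly 100 chunklets rather than a whole drive, and the rebuild writes spread across seven survivors instead of hammering a single dedicated spare.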

      2. Anonymous Coward
        Anonymous Coward

        Re: Software

        Or do what Nimble Storage does... smarter rebuilds.

        Or do what Panasas does... file level RAID.

        Or do what HDFS does

        Or do what GFS does

        Larry@NetApp -- come on. Don't sell us old ideas in a new way.

    3. This post has been deleted by its author

  2. Anonymous Coward
    Anonymous Coward

    Why would anyone go with cash burning startup

    From a performance or feature standpoint, there doesn't seem to be that much reason to go with a startup vs an incumbent vendor. It looks like price-wise they are all converging as well.

    Comments?

    1. TheVogon

      Re: Why would anyone go with cash burning startup

      "There doesnt seem to be that much reason to go with a startup vs an incumbent vendor"

      Proven reliability and feature set spring to mind. If money were no object and I needed enterprise-class storage, then I would almost always buy EMC V-Max, because it is hands down the most capable, powerful and flexible product. There are niche capabilities that you sometimes need other products for, but V-Max does the most in one package IMO. However, in the real world you need to consider what you are getting for your money too, and then other vendors come into the picture.

    2. Anonymous Coward
      Anonymous Coward

      Re: Why would anyone go with cash burning startup

      Well, it depends how many systems and how much it costs to get those 4 million IOPS. Also, are they real numbers, or marketing numbers derived from a 100% cache hit rate?

    3. Anonymous Coward
      Anonymous Coward

      Re: Why would anyone go with cash burning startup

      Depends on the incumbent vendor and its capabilities. Consistent low latency, very high IOPS and data compaction technologies seem to be the startups' real selling points, but they lack many enterprise features which only time can hone.

      Conversely, most of the traditional incumbent vendors are hampered by non-flash-optimized architectures which, although they may be able to achieve monster IOPS, have trouble guaranteeing consistent latency and the same levels of capacity efficiency through inbuilt compaction.

      For all the 4 million IOPS touted here, that will be a very specific and very expensive configuration unlikely to be used anywhere outside of a test lab, and the IOPS number alone tells us nothing about the test criteria or kit required. They may as well have said 5 or 6 million, because no one can really question those numbers and they aren't really required to provide proof.

      1. SPGoetze

        Re: 4 Mio IOPS probably not too far off the mark...

        I'm fairly sure they arrived at the 4 Mio IOPS number by taking the old 24-node 6240 SPEC SFS results (https://www.spec.org/sfs2008/results/res2011q4/sfs2008-20111003-00198.html) and factoring in the increase in controller performance.

        The old result had 3 shelves of SAS disks per controller, so there's no unrealistically expensive RAID 10 SSD config required, like with other vendors' results. Also, 23 of 24 accesses were 'indirect', meaning they had to go through the cluster interconnect. pNFS would have improved the results quite a bit, I'm sure.

        The old series of benchmarks (4-node, 8-node, ... 20-node, 24-node) also showed linear scaling, so unless you'd saturate the cluster interconnect - which you can calculate easily - the 4 Mio IOPS number should be realistic for a fairly modestly sized (per controller) configuration. Real-life configs (e.g. SuperMUC https://www.lrz.de/services/compute/supermuc/systemdescription/ ) will probably always use fewer nodes and more spindles per node.

        1. Anonymous Coward
          Anonymous Coward

          SPEC SFS doesn't measure IOPS, so the number is meaningless.

        2. Anonymous Coward
          Anonymous Coward

          Re: 4 Mio IOPS probably not too far off the mark...

          So what you're saying is that you think they made the number up?

  3. This post has been deleted by its author

  4. Anonymous Coward
    Anonymous Coward

    These figures may be impressive, but NetApp has become so ridiculously complex (especially cDOT, with its mind-blowingly restrictive compatibility matrix) that there are simpler and more cost-effective solutions available.

    NetApp is just no longer unique and in many cases prohibitively expensive.

  5. Anonymous Coward
    Anonymous Coward

    No mention of IBM in any of this?

    Another El-Reg article

    http://www.theregister.co.uk/2014/06/13/gartners_allflash_array_report_has_ibm_as_number_one/

    says IBM are the leading flash vendor, yet there's no mention of them in the article or the comments. Are XIV and V7000 seen as last generation?

  6. HCL

    Head HPC Business Development, HCL, India

    Seems to be a very promising product. Particularly in the HPC arena: if they do away with the need for storage nodes by interfacing this directly into the compute cluster over multiple IB ports, this product will be a killer.

    1. Meanbean

      Re: Head HPC Business Development, HCL, India

      For HPC and ultra-low-latency high performance, check out the E-Series, which has IB.

  7. cBells

    How much would it cost to fill an 8000 series with all-SSD flash drives? Sounds awesome, yet outrageously expensive... if only they could offer inline deduplication to reduce the number of SSDs required and lower the cost per GB. Where do XIV and XtremIO sit compared to this offering?

    1. LarryF

      Post processing dedupe has its advantages

      Post-process dedupe has a lot going for it, number one being that it doesn't get in the way of a CPU that's busy with server I/O requests. Number two: since you have more time to process hashes, you can add data integrity checks to avoid hash collisions altogether. And... if you schedule a dedupe run each night, the capacity overhead required is surprisingly small.

      Larry@NetApp

      1. Anonymous Coward
        Anonymous Coward

        Re: Post processing dedupe has its advantages

        More mumbo jumbo to make Netapp's post process dedupe technology seem relevant.

        The all-flash arrays use inline dedupe today and, despite the above claims, still manage to outperform an equivalent FAS configuration. The data integrity comment is also a ruse, as they all either perform a byte compare on hash match to avoid collisions altogether, or use multilayer hashing to check and correct data integrity issues after ingest; some do both, and all at much faster speeds and lower latencies than NetApp FAS can provide.

        With post-process, if you have 50TB of data to ingest you need 50+TB to land it in (including metadata), so what did you save upfront? Also, if you ever need to recover your data into a post-process dedupe system, you're likely going to have to stagger that recovery between dedupe runs to ensure you have enough space available to accommodate the data.

        Dedupe is good to have in any system, but post process is the least preferable in the world of flash, just ask Netapp when they eventually launch FlashRay :-)
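        The byte-compare-on-match safeguard mentioned above is simple to sketch. This is a hypothetical toy store, not any vendor's implementation; a real array does this in the write path at fixed block sizes with far more compact metadata:

```python
import hashlib

# Toy inline dedupe with a byte compare on hash match. All names and
# structures here are illustrative assumptions.
class InlineDedupStore:
    def __init__(self):
        self.blocks = {}    # block_id -> stored bytes
        self.index = {}     # sha256 digest -> block_id
        self.refcount = {}  # block_id -> reference count
        self.next_id = 0

    def write(self, data):
        digest = hashlib.sha256(data).digest()
        bid = self.index.get(digest)
        # byte compare on match: only trust the hash if the stored
        # bytes really are identical, so a collision can't lose data
        if bid is not None and self.blocks[bid] == data:
            self.refcount[bid] += 1
            return bid
        bid, self.next_id = self.next_id, self.next_id + 1
        self.blocks[bid] = data
        self.index[digest] = bid
        self.refcount[bid] = 1
        return bid

store = InlineDedupStore()
first = store.write(b"same 4K block")
again = store.write(b"same 4K block")  # deduped in the write path
print(first == again, len(store.blocks))
```

        A repeated write returns the existing block ID and bumps a refcount instead of consuming new capacity, and the explicit byte compare means even an astronomically unlikely SHA-256 collision cannot silently map different data onto one block.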

      2. AndrewDH

        Re: Post processing dedupe has its advantages

        If you can't handle inline de-duplication without impacting performance then clearly as you have stated post-process de-dupe would be preferable.

        However, if you can handle inline de-dupe without impacting performance - and a number of vendors, though not NetApp, can - then post-process de-dupe is a poor idea.

        Of course, if you are de-duping anything on spinning-disk-based storage then you are also potentially randomizing the data that your DBMS etc. has just tried to write out sequentially, making your performance much worse. Not a problem with flash, but more of an issue with hybrid flash or disk-based devices.

      3. Anonymous Coward
        Anonymous Coward

        Re: Post processing dedupe has its advantages

        Good lord man! Are you still in the 2000s? Flash enables inline dedupe for so many companies. Just because NetApp's engineering can't adapt doesn't give you an excuse to hide behind CPU I/O requests. Sometimes it is best to just dump the old code and avoid the taxes that come with it. EMC did that with Clariion/VNX and is now doing it again with XtremIO. HP did that with 3PAR (vs. EVA). How about some ONTAP for the 2010s action?

    2. Anonymous Coward
      Anonymous Coward

      XIV is in the same boat as NetApp in that it's not really a flash-optimized solution, but both can quite happily take advantage of flash technology.

      In XIV's case it has limited scale, and hence limited performance, and it supports only a read-only flash cache today; its claim to fame is simplified management, which seems to spring from the fact that you have extremely limited choice in how things are configured.

      NetApp has much higher scalability, flexibility and many more features, but its active/passive architecture is getting very long in the tooth, and cluster mode only really masks the problem rather than solving it.

      XtremIO is a wait-and-see technology for me: it has very limited features, no online upgrades, rushed-to-market features such as UPS backup for DRAM cache, etc., and at present it looks like some brave souls are paying EMC for the privilege of beta testing the product.

      The more mature players such as Pure, SolidFire etc. seem to be in better overall shape in terms of flash optimization and product maturity, but they again lack enterprise features. If I needed AFA performance but wanted enterprise features, then IMHO the only vendor who can really supply that without middleware in the form of additional software or appliances is HP, with their 3PAR 7450 array.

    3. StorageEngineer

      What I do know is that with this upgrade, NetApp is offering hybrid flash arrays at the same price as before the upgrade. The same should be true for all-SSD, and note that NetApp did sell all-flash before. It seems that they have only now woken up to optimizing it and selling it in the market.

      Yes, you are right, inline deduplication is the missing piece with NetApp FAS arrays. But they do offer offline dedupe, which is better from a performance perspective (compared to inline) but delayed. They had better come up with inline dedupe.

      Otherwise, from a data management feature perspective, they are king. EMC is in the same boat with their high-end arrays. If these two companies come up with decent IOPS (say top 10) on their traditional systems with all flash (not XtremIO/EF series), it will be unmatched, other than on price. Pure is desperately trying to match the data management capabilities, but it won't happen overnight. It takes time to evolve.

      1. Anonymous Coward
        Anonymous Coward

        "But they do offer offline dedupe which is better from performance perspective (compared to inline) "

        No, the all-flash boys doing inline today completely outperform FAS and VNX2, with or without their post-process dedupe, so you can't claim that it impacts performance just because they can't do it.

        " these two companies come up with decent IOPs (say top 10) on their traditional systems with all flash (not xtremeIO/EF series), it will be unmatched other than price."

        But that's the problem: their architectures are underpinned by a legacy software stack, which means they can't just throw flash at the problem. This is made worse by the fact they're essentially active/passive solutions with no ability to scale out, and trying to retrofit scale-out to an existing architecture whilst maintaining low latency is a fool's errand. This is precisely why EMC had to buy XtremIO and NetApp are developing FlashRay!

        NetApp are taking a leaf out of EMC's playbook and hyping the fact they can now scale across cores. It doesn't solve the problem in any way, but it buys them some time until they have something more competitive. As for Pure et al, I'm not really sure what remains of many of the startups' value props now that HP have introduced inline dedupe to their 3PAR all-flash array; it does all of the above and then some, and also has killer data services.

  8. MRensh3

    AFA is a loose term nowadays...

    Legacy SANs filled with SSDs don't make an AFA, as EMC understood when they tried selling their first "AFA" with the VNX-F (a VNX with just SSDs). They at least smartened up and bought XtremIO, and are now destroying the likes of Pure and other "AFA" startups with an architecture truly built to support all-SSD. When is NTAP going to come out with a real AFA?

    1. Anonymous Coward
      Anonymous Coward

      Re: AFA is a loose term nowadays...

      Tsk tsk, EMC are destroying no one with what amounts to a public beta of XtremIO. The couple I've seen in the wild were only really taken on board because they were effectively sweeteners for a bigger deal.

      1. MRensh3

        Re: AFA is a loose term nowadays...

        A public beta? They bought XtremIO 2 years ago and have been pouring R&D into it ever since. After the first month of GA it became the #1 AFA in terms of revenue. Sure, they are still waiting to integrate compression and replication into the OS, but they'll be there soon. Even without that, the architecture they've built this product on (scale-out vs scale-up) makes this an easy choice for any company looking into the future vs NTAP, XIV, Pure, etc.

        1. Anonymous Coward
          Anonymous Coward

          Re: AFA is a loose term nowadays...

          Just repeating the marketing hype won't make it so; the revenue numbers at this point are meaningless, as the market wasn't fully defined when EMC made these statements.

          http://www.thegurleyman.com/field-review-xtremio-gen2/

          Public Beta ?

          "XtremIO sharing everything can mean more than just the good stuff. In April, ours 'shared' a panic over the InfiniBand connection... but it was production-down for us"

          Scale out ?

          "True to Justin’s review, XtremIO practically scales up. Anything else is disruptive"

    2. Nick Triantos

      Mental Investment Required

      Don't compare the SW architecture of a VNX to ONTAP, because if there were a storage police, you'd be in jail by now.

      However, everybody needs to make a mental investment beyond the 140 Twitter chars and the social media noise, and I'm willing to help with that. Here's a good starting point for understanding a few things about "real" AFAs vs ONTAP, plus a few other things I'm quite sure you had no idea about.

      http://storageviews.com/

      Disclosure: NetApp Employee

      1. Anonymous Coward
        Anonymous Coward

        Re: Mental Investment Required

        Hehe that's just like the VNX2 launch.

        Yay, finally after a decade we've managed to optimize for multicore CPUs, even though we've been telling you for years that we go twice as fast with each product release.

        This marketing mumbo jumbo is nothing more than an effort to stay relevant in an industry moving to flash, and for all NetApp's great software, their FAS architecture can no longer cut it. If it could, they wouldn't push E-Series for performance, nor would they be developing FlashRay.

      2. mtuber

        Re: Mental Investment Required

        The internet produces fools and trolls by the minute, faster than your ability to educate them.

        It was a good post, man, and those of us who care found it very enlightening.

        1. Anonymous Coward
          Anonymous Coward

          Re: Mental Investment Required

          No, it was an effort to make it appear NetApp had provided some unique innovation to optimize these products for flash. Whereas the reality is they're using the same multicore spin EMC did with the VNX2 release, to make their products appear relevant in this market until they have something better to offer.

          1. mtuber

            Re: Mental Investment Required

            You intentionally choose to ignore how ONTAP writes, which is fundamentally no different from how AFAs do it. So while they made the changes for multicore, the underlying fundamentals are the same as an AFA, which is NOT how the VNX does it. So what he is really saying is: "if I write the same way as an AFA, and I have multicore support, why would you buy an AFA?"

            I guess HDS and HP are also "stupid" for modifying their code...

            1. Anonymous Coward
              Anonymous Coward

              Re: Mental Investment Required

              Dude... yes, I intentionally ignored that, because it's marketing BS. So WAFL's write layout and new multicore scaling combined now qualify the FAS platform as an AFA system? Meh. If there were really more to that statement than marketing to assist with the relaunch of more of the same old, then why on earth would NetApp need to go off and build FlashRay from scratch? Besides, I'm not comparing it to VNX, which for all intents and purposes has hit the same dead end as FAS; that's why EMC are plugging XtremIO for all they're worth.

              A few years ago NetApp marketing were harping on about how WAFL's write layout was also ideal for flash wear levelling and how they could handle the process much better than rivals. They completely ignored the fact that this functionality was built into the drive's firmware and so was completely transparent anyway: it didn't matter where WAFL put the write, because the drive would relocate it anyway :-) Same for post-process dedupe - hey, it's great, except all that shuffling of blocks after the fact causes write amplification (wonder how they handle the reallocate command on SSD) and additional wear to the drives, vs inline, which does neither and which NetApp don't have.

              More marketeering but if you choose to be duped so be it.

            2. Anonymous Coward
              Anonymous Coward

              Re: Mental Investment Required

              HP & HDS have actually done some clever things around flash integration, but their architectures are vastly more integrated than either VNX or FAS, so as a whole their systems have much more scope for the future: close-coupled scale-out architectures, multi-controller and multi-processor scaling for many years, symmetric active/active processing on controllers, etc., as well as all the other flash enhancements. In the case of HP they even have a dedicated product with specific internal tweaks for all-flash, which also now supports inline dedupe as well as all the other inline compaction tech they already had. These very real embedded flash enhancements now allow them to offer an unconditional 5-year warranty even on cMLC drives.

              1. mtuber

                Re: Mental Investment Required

                Like what? Name them please. What are the "clever" things that they have done? NetApp also offers an unconditional 5-year warranty, and if NetApp were to offer cMLC drive support, would that change your stance..? Be prepared to insert your foot in your mouth with regards to cMLC support on FAS...

                1. Anonymous Coward
                  Anonymous Coward

                  So NetApp offer a 5-year unlimited warranty for eMLC and cMLC regardless of whether the drive fails or exceeds its overwrite limit? Then show me that unconditional statement.

  9. Anonymous Coward
    Anonymous Coward

    BTW, I never claimed NetApp couldn't use cMLC; that appears to be your own paranoia kicking in. Since you asked for some examples, read this: http://www.techopsguys.com/tag/3par/

    BTW, those are just a few of the more overt features; there are also lots that just don't get called out because they're simply part of the architecture, e.g. symmetric active/active, multicore and multinode scaling of workloads, wide striping, distributed virtual spares, ASIC-based thinning, etc. I can provide you HDS info as well if you'd like.

    1. mtuber

      I think you have no idea what you're claiming, because you keep changing the conversation... dude.

      And BTW... how is ASIC use and wide striping different with SSDs than with HDDs? It's not. It's the same stuff with 3PAR (different packaging in HDS's case), but under the covers there have NOT been any meaningful changes.

      1. Anonymous Coward
        Anonymous Coward

        The point being that wide striping is good no matter the media: for disk you get load balancing and high performance, and for SSD you get both, plus even wear across all those expensive SSDs. Are you trying to suggest wide striping is not a good idea? Is that maybe because your favourite vendor doesn't have that capability?

        BTW, before claiming there are no meaningful changes, at least take the time to read the post I linked to:

        http://www.techopsguys.com/tag/3par/

        Now I've answered all of your questions and provided links where appropriate, and you continue to beat about the bush. Where is that 5-year unlimited warranty you promised? What happened to the foot in my mouth, and what exactly is this release other than a rebranding exercise?

      2. Anonymous Coward
        Anonymous Coward

        Well, I linked to what you asked for, dude, and also gave you an opportunity to show me the 5-year unlimited warranty. But I think you just realized you're out of arguments.

        1. mtuber

          NetApp offers a 5-yr unconditional warranty regardless of the failure, nor do they force-fail their drives at 95% like HP does and claim victory!

          Do a search on Google and you'll find it.

          here's a start...

          http://www.tympaniinc.com/advanced-technology-blog/PostId/98/netapp-closes-the-gap-with-all-flash-fas

  10. mtuber

    Is that it? Writing in 4K pages instead of 16K and flushing 4K? How about write coalescing? In-place updates? 3PAR has made a couple of modifications and is considered an all-flash array, yet ONTAP, which has done all these and much more, is not? What planet do you live on?

  11. mtuber

    And one last thing... does the HP unconditional warranty cover all of the SSDs qualified for the 3PAR platform (100,200,400,480,920,1.9) or 3 specific models..?

    If the optimizations are so grand and effective in wear levelling, then why not apply the 5-yr warranty to all models? Why even force-fail to begin with? Even more so, with inline dedupe there will be fewer writes, and since the super-duper wide striping is so effective in evenly utilizing SSDs, per HP's claims, why not do it? I assume the inline dedupe is also available with hybrid configs too, since the ASIC offloads everything, right? And all that with a default R6 config, right?

    Good luck

    1. Anonymous Coward
      Anonymous Coward

      Oh dear, you're getting bent out of shape over our little discussion.

      See, you didn't comment on the wide striping. The 95% force-fail is essentially for non-deduped SSD, and there are multiple alerts way before customers reach this level; in over 5 years of selling SSD they have yet to see a drive reach it. But would you rather they just ignored that this threshold had been exceeded and allowed customers to inadvertently place their data at risk?

      Write coalescing has always been part of the platform. But since you've completely misrepresented the information in the link I provided, here goes: the inline ASIC strips the zeros at wire speed, and inline dedupe does the same for the ones. Adaptive sparing releases reserved space on the SSD, providing a lower cost per GB with no detriment to reliability; adaptive reads and writes give SSD-aware I/O with a modified flusher that actually improves upon the existing disk-based write coalescing; write-cycle monitoring; etc. etc.

      BTW your 5 year warranty link states the following

      If a reliability issue is encountered within the five-year warranty period, NetApp will replace the drive...

      That suggests they'll replace the drive if it has a defect, not if it has exceeded its write-cycle limit. The two are very different things, and HP offer both for all current SSD drives.

      1. R8it

        (m)tuber's head is buried in the ground

        mtuber's NetApp defense sounds a little desperate to me. There is a group of optimized all-flash arrays that includes Pure, HP 3PAR 7450, SolidFire, and EMC XtremIO. And then there is a group of platforms that are clearly not optimized, NetApp FAS and EMC VNX/VMAX being obvious examples. Why is NetApp investing in FlashRay if FAS meets the need?

        mtuber, your arguments are simply not compelling. AC won hands down.

        1. Anonymous Coward
          Anonymous Coward

          Re: (m)tuber's head is buried in the ground

          Not sure how you can say mtuber lost the argument. NetApp does indeed have a 5-year unconditional SSD warranty regardless of wear. Additionally, WAFL, even though it was written 20 years ago, happens to be optimized for SSDs: it is kinder to them because it coalesces writes (like the new startups' products do, by the way) and then by nature distributes those writes optimally (WAFL = Write Anywhere File Layout) across larger areas of the SSDs, which in turn increases write and read performance as well as creating less wear on the drive. Lastly, FAS with flash is unified SAN and NAS with dedupe and compression across all protocols, has built-in application-aware snapshots and archive, and can be combined with, and replicate to, non-flash NetApp storage. What other product can do that without being a Frankenstein collection of bolted-together disparate products and NAS gateways?
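          The write-coalescing idea mentioned above (buffer small random writes, flush them as one large sequential stripe) can be sketched roughly as follows; the stripe size and class structure are invented for illustration and are not NetApp's actual WAFL implementation:

```python
# Rough illustration of write coalescing: incoming random block writes are
# buffered, then flushed together as one sequential stripe write, so the
# SSD (or disk) sees a few large writes instead of many small ones.
# STRIPE_BLOCKS and the class layout are invented for this sketch.

STRIPE_BLOCKS = 4  # flush once this many blocks have accumulated

class CoalescingWriter:
    def __init__(self):
        self.buffer = []   # pending (block_address, data) writes
        self.flushes = []  # each flush is one sequential stripe write

    def write(self, addr, data):
        self.buffer.append((addr, data))
        if len(self.buffer) >= STRIPE_BLOCKS:
            self.flush()

    def flush(self):
        if self.buffer:
            # one large write to the media replaces many small ones
            self.flushes.append(list(self.buffer))
            self.buffer.clear()

w = CoalescingWriter()
for addr in (7, 42, 3, 19, 88):   # five scattered "random" writes
    w.write(addr, b"x")
w.flush()
print(len(w.flushes))  # 2 stripe writes instead of 5 individual ones
```

          Fewer, larger writes are exactly why this pattern reduces both write amplification and wear on flash media.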

          In regards to FlashRay, it sounds like it will be something above and beyond what startups like Pure and products like HP 3PAR 7450, SolidFire, EMC XtremIO etc. are doing, not just a me-too product.

          Where is the evidence that mtuber’s justification can be categorized as an argument loss?

          1. This post has been deleted by its author

          2. Anonymous Coward
            Anonymous Coward

            Look, he lost and you're just prolonging the agony. If NetApp do indeed provide an unconditional 5-year warranty, regardless of overwrites (DWPD), for eMLC & cMLC, then just post the link and we can all move on. It would also be good to know what the price per GB is compared to HP's $2 per GB (linked to a source, of course).

            But let's be honest: regardless of the improvements in WAFL/ONTAP etc., it just doesn't have the right architecture to take full advantage of flash. If it could, they wouldn't need FlashRay. If you can't see that, then you need to wake up and take a look outside the NetApp bubble.

            As for the net new flash platform they're developing, "it sounds like"... yeah, everything sounds great in PowerPoint, but since there's a complete absence of actual product, this statement is pointless at best.

            1. This post has been deleted by its author

            2. Anonymous Coward
              Anonymous Coward

              Regardless of all the above SSD-optimized WAFL, can NetApp safely turn my 800GB drive into a 920GB one, or my 1.6TB into a 1.9TB, and then offer inline deduplication? Because if they can't, then they can't lower my cost per usable GB. They've also never been known for stellar performance (see their CTO's comments below) outside of a NAS-only environment, and I've seen first hand the hit their compression has. So why would I even consider them for an all-flash workload (is NAS even a use case for all-flash?) when something like 3PAR can give me all that and much more.

              1. Anonymous Coward
                Anonymous Coward

                re: Regardless of all the above SSD-optimized WAFL

                You are correct that when NetApp uses SSDs for capacity, deduplication won't give you 920GB on an 800GB drive; it will give you much more, to the tune of 1.2TB+ on an 800GB drive, and NetApp guarantees 50% or more efficiency with VMware. When NetApp uses SSDs for FlashPool cache it does the same, since the data is already deduped, so we can put 1.5x+ more into the cache than the amount of cache you purchased. Oh, and it does this for all protocols (NFS, CIFS, FC, iSCSI, FCoE and SMB) simultaneously. So it's actually much more efficient than whatever box you are talking about that turns 800GB into 920GB, and it can do it for all protocols.
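                The capacity claims being traded here reduce to simple arithmetic: raw capacity multiplied by an assumed efficiency ratio. A quick sketch using the posters' own figures (the ratios are their claims, not guaranteed results):

```python
# Effective usable capacity after deduplication: raw capacity times the
# assumed dedupe ratio. The ratios below are the commenters' claimed
# figures, used purely for illustration.

def effective_capacity(raw_gb, dedupe_ratio):
    """Effective usable capacity (GB) after deduplication."""
    return raw_gb * dedupe_ratio

# An 800 GB SSD at the claimed ~1.5x dedupe: the "1.2TB+" figure
print(effective_capacity(800, 1.5))   # 1200.0

# The rival 920 GB-from-800 GB figure implies only a ~1.15x gain
print(920 / 800)                      # 1.15
```

                Whether either ratio holds on a real workload depends entirely on how much duplicate data that workload actually contains.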

                Regarding not being known for stellar (outside of NAS) performance, why does NetApp have the #1 Oracle SLOB benchmark utilizing FlashPools with Fibre Channel, then? Also, thanks for taking Jay Kidd's statements out of context again. He said “EF540/550 can run and use flash better than ONTAP." He didn't say ONTAP is crap for flash; he said that our other products can do better. The EF540/550 are simple, low-featured, block-only (like 3PAR) boxes with no overhead because most things are done in an ASIC; of course the EF540/550 would perform better. That doesn't mean that ONTAP performs poorly. If ONTAP was so bad, it wouldn't be the #1 storage operating system in the world.

                Is NAS even a use case for flash? Duh, virtual desktop for one. Try running 10,000 VDI instances on traditional disks without them being either on flash or cached by it. It significantly helps VMware as well, especially when you are running OLTP applications in a VM. According to IDC, NAS is the direction of the storage market; while block is in decline, NFS, CIFS and SMB are on the increase. If you purchase a product today which does not have NAS options, you are painting yourself into a corner for upcoming versions of VMware, SQL, Exchange and other applications.

                Review the 3PAR spec sheet again; it cannot do anything near what the NetApp can do. Block only unless you purchase a bolt-on gateway; no deduplication except on the 7450, unless you purchase the bolt-on gateway, which only does dedupe for NAS, not block, and can only scale to 4 nodes vs NetApp's 8 or 24 depending on how you implement. No integrated VTL capability either; HP has to bolt on yet another product for that. HP has to bolt on a bunch of products that were purchased from, and developed by, completely separate companies to do anything close to what NetApp FAS has done for years. If HP is so good at storage, why is everything they sell either an acquisition (3PAR, LeftHand, IBRIX) or an OEM (MSA/EVA/Windows Gateway)? I can't even remember the last major HP storage product that they developed in house; I'm not sure there ever was one.

                While you’re at it, review the converged infrastructure validated solutions market share numbers, which is where the market is heading. NetApp and Cisco with FlexPod are #1 with 75% installed share, Vblock and VCE are #2 with 25% installed share, and HP is lumped in with the others, or when shown separately has 2% share, depending on which chart you look at. You would think a company like HP, which has its own server and storage business, would do better than the combination of two completely separate companies (NetApp and Cisco), but the IDC numbers tell the tale.

                When HP passes NetApp in shared storage market share and converged infrastructure market share, maybe they can stop throwing out FUD and start throwing out facts.

  12. Anonymous Coward
    Anonymous Coward

    http://www.theregister.co.uk/2014/06/23/how_much_disruptive_innovation_does_your_flash_storage_rig_really_need/?page=2

    "Jay Kidd, NetApp's chief technology officer, thinks that all-flash arrays outside of legacy storage arrays have their place. He believes that extending the capabilities of NetApp's ONTAP storage controller software would be a stretch too far to cover flash, as its primary focus is disk. The company's coming all-flash FlashRay and existing EF540/550 systems, neither of which run ONTAP, can run and use flash better than ONTAP."

  13. Anonymous Coward
    Anonymous Coward

    So, lots of hand waving and misdirection there to bring the conversation back to NetApp's traditional sweet spots of spinning disk and NAS. You've really swallowed the Kool-Aid there; the sheer desperation of your arguments is quite amusing, even if they're completely irrelevant to the discussion at hand. Personally, I would shy away from the performance discussion, otherwise we'll be forced to review your SPC-1 results rather than the hand-picked niche benchmark you offered.

    I wouldn't even attempt to compare 3PAR data services with what is available on the E-Series, which isn't even in the same class. Oh, and if you haven't noticed, NetApp offer multiple platforms and they're adding more; they also went all out to acquire Data Domain for backup, so they must know something isn't quite right with their current strategy for block, flash and backup.

    BTW, I didn't take your CTO's comment out of context; follow the link. It's obvious your marketing and engineering divisions have differing opinions (again).

    Now, where is that 5-year eMLC & cMLC warranty statement?

    1. Anonymous Coward
      Anonymous Coward

      RE: So lots of hand waving and misdirection

      Again, NAS is the future of shared storage, especially because it facilitates cloud, and block is in decline; this isn't NetApp Kool-Aid, it is IDC-reported fact. Still, over 50% of NetApp FAS ONTAP storage and 100% of E-Series SANtricity storage is deployed as block, so NetApp has more market share in NAS than all other storage companies and more block market share than HP and every other storage company besides EMC. Regarding the benchmark, I picked a real-world benchmark. The pie-in-the-sky, money-is-no-object SPC-1 benchmark looks good in the marketing docs, but no customer anywhere would ever purchase or could afford those configurations. The SLOB benchmark was specifically created to show real-world capabilities with configurations that customers would actually run with a real-world product like Oracle.

      Regarding NetApp’s multiple platforms, there are only two, ONTAP and SANtricity, and you can put SANtricity disk and controllers behind ONTAP, so they can be used together and you have investment protection. Can't say that for any other brand of storage out there. (Try to put LeftHand, MSA or EVA behind a 3PAR.)

      Regarding Data Domain: I would agree that NetApp does not have a good solution for backing up things that are not on the NetApp SAN. That doesn't mean that the SAN-based backup is bad, any more than Jay Kidd saying ONTAP isn't as good with flash as SANtricity means that ONTAP is bad for flash.

      Lastly, there is no separate 5-year eMLC & cMLC warranty statement because there is no fine print. If there were, the fine print would be posted instead. The 5-year eMLC & cMLC warranty is the same as for any other disk NetApp sells.

      Back to the future, though. When HP passes NetApp in NAS market share, SAN market share or integrated infrastructure market share, they can talk some more smack. Until then, one would think that a company that just had 5 years of dramatically declining market share, going from #2 to #4 and being tied with Dell for 4th place, would keep quiet rather than ripping on other companies' products. NetApp, by the way, is now in the #2 position where HP was 5 years ago.

      The real desperation is in HP trying to convince everyone that they are as big as they used to be, bigger than they really are, and taking share from everyone by rebranding products people stopped buying 5 years ago.

      1. This post has been deleted by its author

      2. Anonymous Coward
        Anonymous Coward

        Re: RE: So lots of hand waving and misdirection

        I'm still unsure how any of this is relevant to the conversation at hand (all-flash arrays). NAS is indeed growing, but it's not the general-purpose NAS you have on site and it's not the NAS you run your business-critical systems on (if you ever did). It's the external cloud-based storage, which is a commodity play: stack it high and sell it cheap, and NetApp don't play there.

        Coming back to file vs block, the reality is that regardless of the front-end protocol, it all ultimately comes from a block-based device, yes, even that used by ONTAP; you just front-end the block with a file server, and that's why you can make SANtricity look like ONTAP, as HP or any other vendor could also do for their own storage. Now try doing it the other way around, SANtricity serving out ONTAP, and you have a problem; but that's exactly what you're suggesting others should do to make it seem you have an advantage.

        As for OEM relationships, well, IBM just kicked the ONTAP one into touch, and the E-Series, formerly the LSI Engenio product range, had most of those relationships in play before NetApp made the acquisition. In fact NetApp bought that business specifically for the OEM market, no doubt suspecting that at some point the IBM relationship would come to a close. BTW, now that IBM have the low-end V-Series, I can see the remainder of the OEM relationship going west pretty soon also, so you can scratch another one.

        NetApp's problem was that it wanted to catch EMC, and EMC had stopped running a long time ago; they just made incremental improvements and farmed their customer base. NetApp did catch up, but then assumed they'd done enough (hurrah) and so made the mistake of resting on their laurels and failing to innovate (just ask some of their ex-staff and customers; this one is the biggie). Now the market and EMC are moving again (XtremIO, Isilon etc.) and NetApp are clinging on for dear life, making the right noises in the hope no one will notice before they can get something competitive out the door.

        Now, regardless of all of the above, this is a conversation about ALL-FLASH ARRAYS, and the purpose of an AFA is to provide predictable low latency across multiple concurrent workloads, with the aim of providing high transactional throughput and/or a great user experience. Adding the additional overhead of file-based protocols, or yet another block translation layer, just increases latency (fact) and provides pretty much zero benefit in these environments unless you have a poor block implementation in the first place. So why even bother, if your design or reliance on file-based protocols just defeats the object of providing predictable low latency? For hybrid configurations I would agree file has a place, but that isn't what this conversation was about, so your arguments and product-positioning attempts are invalid.
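        The latency-stacking argument can be illustrated with back-of-envelope arithmetic; every figure below is an invented example, not a measured number for any product:

```python
# Illustrative per-layer latency contributions, in microseconds.
# All values are made up to show how layers stack, not vendor measurements.
flash_media = 100   # NAND read service time
block_stack = 50    # array block layer overhead
file_layer  = 150   # additional NAS/file-protocol layer

direct_block = flash_media + block_stack               # block path
via_file     = flash_media + block_stack + file_layer  # file path on top

print(direct_block, via_file)    # 150 300
print(via_file / direct_block)   # 2.0
```

        The exact numbers don't matter; the point is that each extra translation layer adds its service time on top of the media latency, which is why it only makes sense if that layer buys you something.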

        1. Anonymous Coward
          Anonymous Coward

          I'm still unsure how any of this is relevant to the conversation at hand (all Flash Arrays).

          Are all-flash arrays really the conversation at hand? You keep changing the conversation each time you hit a wall of facts that prove your FUD is untrue. First you say that RAID-DP is bad for bigger drive rebuilds (not true), WAFL is bad for flash (not true), FAS is bad for flash (not true), NetApp doesn't do block well (not true), then that NetApp's warranty isn't unconditional on SSDs (not true). What really is the topic, other than your jumping all over the place trying to discredit NetApp, just to be proven incorrect by real facts time and time again? I am done responding to your comments as they are all BS intended to misinform.

          Last facts to close this out as I am not responding to further nonsense.

          1. Regarding NAS: yes, mission-critical systems run on NFS and CIFS and will be running on SMB in the near future. Is your VMware not mission critical? Is your file serving not mission critical? Are Exchange and SQL on SMB 3.0 not mission critical? Block is going away; IDC numbers show that, cloud is pushing that, even for critical platforms, and nobody is better positioned for this NAS future than NetApp. And even if NAS is not the future as you say, NetApp is #2 in block market share, so it still holds a leadership position in block-based storage.

          2. Regarding your IBM comment: IBM still OEMs E-Series and has sold $3B of it in the past 10 years; it is historically their best-selling storage product, so that's not going away any time soon, as they have no product that can replace it. They only sold FAS when someone wanted NAS, so they didn't sell that much of it to begin with; it's not a big deal that they are ending that relationship.

          3. Regarding cloud: over 175 cloud providers run on NetApp and provide private NetApp storage for mission-critical customers, so yes, NetApp does play there, and yes, NAS plays right into that. You are correct that the cheap cloud storage where people put data nobody cares about probably doesn't go on NetApp, but the mission-critical applications large corporations and governments are moving to the cloud do.

          4. Lastly, if the conversation is about all-flash arrays, NetApp has all-flash FAS and all-flash E-Series today, both of which are capable of providing performance in excess of any customer's requirements while adding value that other vendors' products do not. Add to that the upcoming FlashRay, which combines the best of both and will extend NetApp's leadership even further.

          At the end of the day, it is how much product you sell and ship that shows who the leaders are, not how much FUD and BS you can spew. NetApp is the #2 storage brand in the world with the #1 storage operating system, is the #1 provider of storage to the US Federal Government and ships more effective usable capacity than any other vendor. All the FUD and BS in the world won't change those FACTS.

          1. Anonymous Coward
            Anonymous Coward

            Re: I'm still unsure how any of this is relevant to the conversation at hand (all Flash Arrays).

            Well, yes, all-flash arrays were the conversation at hand, but as you've proved time and again, that's not the conversation you want to have, since you would then have to tacitly admit ONTAP isn't up to the task.

            If it were, then the EF550 (which is being used as a stopgap) simply wouldn't exist (lacking inline dedupe and many other basic features), and the FlashRay product most definitely wouldn't be in development.

            All of the technologies being discussed relate to AFAs: SSD warranties, wear leveling, wide striping, sparing, inline dedupe, RAID levels etc. It's actually the NetApp faithful who keep flipping the conversation between protocols, file, object, cloud, post-process dedupe etc. in order to construct a defensive narrative that fits NetApp's traditional sweet spot.

            BTW, many of those comments weren't actually mine, but I do agree with most... good luck with that IBM relationship :-)

    2. Anonymous Coward
      Anonymous Coward

      People in glass houses shouldn't throw stones. NetApp not only have multiple product lines, but all of them other than classic ONTAP have come from acquisitions.

      1. Anonymous Coward
        Anonymous Coward

        RE: People in glass houses shouldn't throw stones

        You are correct: NetApp has TWO main storage products, one developed in house, one from an acquisition. HP has ZERO developed in house out of their FOUR main shared storage product lines.

        I am not sure if HP EVER developed their own storage product, unless it was pre-MSA/EVA days, since those are OEM'd from Dot Hill. On the contrary, there are 12 companies that OEM NetApp's storage and resell it as their own. It must not be too bad a product if 12 other companies are reselling it as theirs.

        Note that HP, Dell, EMC and IBM all have ZERO companies who OEM their product.

        PS: I am not throwing stones; I am returning them to someone who is desperate and attacking NetApp's products with inaccurate information. Once they stop posting FUD, I will stop posting FACTS.

        1. gazthejourno (Written by Reg staff)

          Re: RE: People in glass houses shouldn't throw stones

          Would you care to disclose the nature of your relationship with NetApp, given that your posting history features little except stout defences of that poor, downtrodden firm?

        2. Anonymous Coward
          Anonymous Coward

          Re: RE: People in glass houses shouldn't throw stones

          Hmm, who are those OEMs? Please name them.

          I'd be really interested to understand whether they are big storage players or just some niche guys who need a cheap, low-feature back-end storage solution to support their more valuable software stack.

          1. Anonymous Coward
            Anonymous Coward

            RE: Hmm, who are those OEMs? Please name them.

            IBM, Dell, Oracle, SGI, TeraData, Bull, Depot, EVA, NEI, RAID Inc., T-Platforms.

            I think IBM, Dell and Oracle (via SDK) count as big storage players.

            1. Anonymous Coward
              Anonymous Coward

              Re: RE: Hmm who are those OEM's please name them ?

              But at least two of those big storage players, Dell & IBM, only take the very lowest end of low-end boxes. Not sure about Oracle's storage (who is?), but their appliance model is typically more about DAS anyway. And the rest... any thoughts on why none of them license the mighty ONTAP? Like I said in the first place: low-end, low-feature storage, yawn. Dot Hill & Xyratex do pretty much the same for many of the above.

  14. Anonymous Coward
    Anonymous Coward

    Does this rather desperate rebranding signal the delay of FlashRay?

This topic is closed for new posts.

Other stories you might like