NetApp, tuck yourself in – your mid-range is showing: New FAS8000 on sale, ONTAP updated

As expected NetApp has launched its FAS8000 mid-range array, a point ONTAP software release, and FlexArray virtualization software. The FAS8000 is a scale-out design with 24-node clusters and hybrid cloud deployments in mind. It has V-series functionality, which virtualizes third-party arrays via a NetApp head unit, included …

COMMENTS

This topic is closed for new posts.
  1. Lusty

    Awesome, now where's that Azure integration?!

  2. StorageArch

    Availability

    The FAS8000 is available to quote and order immediately with Data ONTAP 8.2.1 RC2. Shipments are planned to begin March 2014. Starting in May 2014, FAS8000 systems will be orderable as Factory Configured Clusters (FCCs).

  3. Nate Amsden

    how different is the design?

    I mean, aren't most NetApp arrays basically just x86-64 servers with RAM, NVRAM, PCI Express, etc.? How does this system differ in design so that it is more "built for the clustering"? Is the software that runs on top somehow different from other NetApp arrays with the clustering software? I wouldn't expect it to be, since that is one of NetApp's claims to fame.

    Just seems like it is the same with just more powerful hardware...

    1. Anonymous Coward
      Anonymous Coward

      Re: how different is the design?

      I think what is meant by that is there are built-in ports for clustering, whereas in the previous models cards had to be added to support the cluster interconnects. I suspect it is more than that, but that is part of it. The faster processors and additional memory probably don't hurt either.

      1. 4bitsNnibbles

        Re: how different is the design?

        No, previous models had dedicated network ports for c-mode cluster interconnect as well. Maybe you're referring to the CF ports in a failover-pair?

    2. 4bitsNnibbles

      Re: how different is the design?

      Most <insert storage vendor here> arrays *are* x86-64 servers with RAM, NVRAM, PCI, interconnects, etc. Back in the day, NetApp filers used to run the gamut of CPUs (MIPS, SPARC, Alpha, etc.), but this hasn't been the case in quite a while. As far as I've seen (experienced with EMC, NetApp, Hitachi, and Nimble arrays only), all arrays are x86 boxes, interconnected and 'clustered' somehow, speaking the same open storage protocols as everyone else. On the SAN side, maybe the differentiator is the cache card design and the fault-tolerance design of FA boards, but I doubt there's much original engineering there at the chip level; most of the originality is in hardware design, and most of the magic is in software.

      I believe NetApp is talking about their Clustered ONTAP mode of operation (up to 24 nodes speaking to each other in one cluster) as opposed to 7-Mode (2 nodes in a cluster-failover pair). As far as I know, C-Mode was already available in 8.2 and maybe earlier, so I'm not sure what sets this release apart.

      1. StorageArch

        Re: how different is the design?

        Clarification: EMC, NetApp, and Nimble arrays ARE Intel x86 64-bit servers running customized OSes. Hitachi storage systems are FPGA/ASIC based, as are NetApp E-Series (ASIC). They do use smaller Intel processors for management commands, but have wider parallel busses because of their design. That is why those systems can drive data quickly and efficiently to and from disk with little need for cache, unlike Intel-based systems. Arguably, some applications do use cache, so cache is available. However, Intel's Sandy Bridge technology is an attempt to emulate the efficiency of FPGA/ASIC parallel functionality. Look up how QPI links work, and then FPGA/ASIC. Consequently, two Hitachi systems can do the equivalent work and scalability of 10+ NetApp FAS systems, without adding in cache cards. NetApp E-Series outperforms the FAS hands down. That's why they use it for High Performance Computing and Big Data.

    3. Lusty

      Re: how different is the design?

      It's gone to PCIe 3.0, which is a big difference in terms of available bandwidth, something you want in a SAN if you're putting oodles of high-speed ports into it. They have changed from 8Gb FC to converged 10GbE/16Gb FC ports as standard and added a few 10GbE ports as well. I think the new models essentially require fewer add-on cards to create a viable cDOT design than the older models did, certainly the 32xx models.

      They use considerably less power than the older models too
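      The bandwidth jump from PCIe 2.0 to 3.0 is easy to put numbers on. A rough back-of-envelope sketch (the per-lane figures below are the published PCIe signalling rates and encoding overheads, not anything NetApp-specific):

```python
# Rough per-lane effective bandwidth for PCIe 2.0 vs 3.0.
# PCIe 2.0: 5 GT/s with 8b/10b encoding; PCIe 3.0: 8 GT/s with 128b/130b.
def pcie_lane_mbps(gigatransfers_per_s, encoding_efficiency):
    """Effective payload bandwidth per lane in MB/s (before protocol overhead)."""
    return gigatransfers_per_s * 1000 * encoding_efficiency / 8

gen2 = pcie_lane_mbps(5.0, 8 / 10)      # 500 MB/s per lane
gen3 = pcie_lane_mbps(8.0, 128 / 130)   # ~985 MB/s per lane

# An x8 slot roughly doubles in usable bandwidth:
print(f"PCIe 2.0 x8: {gen2 * 8 / 1000:.1f} GB/s")  # ~4.0 GB/s
print(f"PCIe 3.0 x8: {gen3 * 8 / 1000:.1f} GB/s")  # ~7.9 GB/s
```

      That near-doubling per slot is what makes dense 10GbE/16Gb FC port counts viable without extra cards.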

    4. Anonymous Coward
      Anonymous Coward

      Re: how different is the design?

      It's more of the same: a standard hardware refresh and an apparent performance increase courtesy of Intel. The usual doubling of cores, so the marketing message is that we now go twice as fast for the same money (as if). Nothing earth-shattering here, just a tin refresh of a fairly tired platform combined with lots of smarketing hoopla.

  4. Anonymous Coward
    Anonymous Coward

    A couple of comments

    Former customer here.

    1. Sandy Bridge is the N-1 generation, not the latest processor technology -- a moot point if you consider the storage controller as a black box, but then don't claim support for the latest and greatest. This claim is being made by NetApp reps (and in an earlier article -- http://www.theregister.co.uk/2014/02/14/netapps_fas8000_specs/).

    2. These boxes are a performance bump, with the midrange models supporting more flash and all models supporting fewer PCIe expansion slots -- 8 and 4 per HA pair for the FAS8060/40 and FAS8020 respectively. I would like to know the relative performance of the FAS8020 vs. the FAS3250, and the FAS8040 vs. the FAS6220. It seems like a good $/performance improvement if they can deliver FAS3250 performance at the FAS3220 price point.

    3. I can appreciate the additional flash support, but why can't I get higher Flash Cache capacities vs. Flash Pool capacities? There are still lots of caveats on the total flash supported.

    4. Can I use MetroCluster with the FAS8020? I like MetroCluster. It's still one of the more relevant and relatively simple synchronous HA technologies out there.

    1. Anonymous Coward
      Anonymous Coward

      Re: A couple of comments

      All these new CPUs, PCIe Gen 3, larger caches, 16Gb HBAs, a new optimized OS, etc., and the FAS8040's SPC-1 result of 86,072 SPC-1 IOPS can't even match a 3PAR F400 midrange system, a product launched 5 years ago, at 93,050 SPC-1 IOPS. That's just the performance; on the capacity front, the NetApp capacity is less than half utilized. I'm pretty surprised NetApp even published this one.

      1. Anonymous Coward
        Anonymous Coward

        Re: A couple of comments

        Nice try but let's actually look at the details you're using as a comparison.

        - 3PAR = 384 x 15k drives, mirrored, vs NetApp = 195 x 10k drives, RAID-DP (far more usable capacity just from the RAID scheme; the drive size is irrelevant IMHO, even though it's in NetApp's favour as well)

        - 3PAR = 4 x F400 storage nodes vs NetApp = 2 x FAS8040 storage nodes (half the nodes used, but it can scale to 8 for SAN. Assuming linear scaling, NetApp at the same node count would be at roughly 2x the 3PAR's performance, with about ~5x the usable capacity, for the same or less cost. Pretty sweet deal.)

        - 3PAR = $548k, but it doesn't state whether that's discounted or list cost, vs NetApp = $495k list price (also about 2.5x+ the usable capacity at that price). If the 3PAR figure is discounted then the discussion is very different, especially if it's a typical 50-60% discount.

        - 3PAR = Higher latency at their reported number so let's compare at similar latency.

        -- ~3ms range : 3PAR@46k vs NetApp@68k

        -- ~6ms range : 3PAR@~79k vs NetApp@86k

        NetApp does have 512GB of Flash Cache, which does help, and 3PAR doesn't have that much caching or any tiering in use.

        Bickering between benchmarks can go on and on, but it's critical to do it objectively: compare the real results, actually read the details, and understand the numbers along with their relevance to standard (real-world) configurations. Both configurations appear pretty standard for each vendor, which is good, and I'm happy to see they aren't examples of benchmark-queen configurations. That gets annoying. Not disclosing whether the 3PAR pricing is list or discounted (unless I missed that) is misleading, as an accurate comparison cannot be done. The 3PAR result is a bit old, and it's not a good comparison: it has far less usable capacity, more controllers, no disclosure on list vs discounted price, and performance at similar latency that isn't that close (but not too far off either).

        Understanding how SPC-1 works would be good before making claims, or before actually looking at the details. Vendors like to tout whatever aspect of it works in their favour, which is why it would be better to apply additional standards to the benchmarks, like always having to show list price and performance at standardized latency marks (1ms, 3ms, 5ms, 10ms). Obviously I've taken too much time just comparing these numbers, but it didn't feel right letting such a misleading statement stand. Now to get back to my real job :-)
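        For what it's worth, the raw price-per-IOPS arithmetic from the two results quoted above comes out nearly even (a quick sketch only; as noted, whether the 3PAR price is list or discounted isn't disclosed):

```python
# Back-of-envelope $/SPC-1-IOPS from the figures quoted in this thread.
def dollars_per_iops(price_usd, spc1_iops):
    """Price per SPC-1 IOPS; says nothing about capacity or latency."""
    return price_usd / spc1_iops

f400 = dollars_per_iops(548_000, 93_050)     # 3PAR F400 (list? unknown)
fas8040 = dollars_per_iops(495_000, 86_072)  # NetApp FAS8040 (list)

print(f"3PAR F400:      ${f400:.2f}/IOPS")   # ~$5.89
print(f"NetApp FAS8040: ${fas8040:.2f}/IOPS")  # ~$5.75
```

        A few percent apart on $/IOPS, which is why the usable-capacity and latency details end up carrying the argument.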

        1. Anonymous Coward
          Anonymous Coward

          Re: A couple of comments

          Nice try, but as I said, the F400 benchmark is 5 years old, running on a PCI-X bus. Yes, it has four controllers, but those were designed over 5 years ago and didn't have the benefit of the latest generation of multicore Intel CPUs, high memory counts, PCIe Gen 3, 16Gb HBAs, etc.

          Those four controllers on the 3PAR truly are a single system, not individual failover pairs cobbled together, and I'd like to see some proof of the linear scaling you assume on NetApp. On the disk front, the F400 had more spinning disk because at that time there was no concept of Flash Cache, which is why latency was higher.

          Yes, the disks were smaller because this was 5 years ago, but 3PAR tested against 85%-plus of the deployed capacity, whereas NetApp was sub-50%. Regardless of how you cut this, NetApp's brand-new FAS8040 had worse performance and worse capacity utilization than a 5+ year old, discontinued platform. Pricing is pretty close, but as you say we don't know the discount, and pricing is probably the last thing you could directly compare given the benchmark was run in June 2009.

          And yes, I do understand how SPC-1 works and how to make the comparisons; that's why I stated I was surprised NetApp actually published this. It's not like them to submit something like this without some spin -- lower latency, better utilization, snapshots turned on, etc.

          1. Lusty

            Re: A couple of comments

            "at that time there was no concept of flash cache which is why latency was higher."

            No, it was called PAM in the old days :)

        2. Anonymous Coward
          Anonymous Coward

          Re: A couple of comments

          I see what you were doing there, bringing in usable capacity based on the larger disk capacities used in the NetApp test rather than the more appropriate tested capacity (Application Utilization) -- short stroking, anyone? 3PAR's 5-year-old benchmark tested against 27TB vs NetApp's 32TB; the actual raw-to-tested utilization figures were 96% for 3PAR and 37% for NetApp. Now why would they want to do that?

          Note SPC-1's rule that "Unused storage ratio may not exceed 45%". NetApp were at 42%, so sailing very close to the wind on that one. I suppose testing against less capacity means a smaller cache is required, which in turn brings down the price and, purely coincidentally, their eventual SPC-1 $-per-IOPS number.
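          A quick sketch of the utilization arithmetic being argued here (the tested-TB figures are the ones quoted in this thread; the physical capacities are assumptions back-derived from the stated 96%/37% percentages, not numbers from the SPC-1 filings):

```python
# Raw-to-tested (Application Utilization style) ratio: tested capacity
# as a fraction of physical capacity configured in the benchmark.
def application_utilization(tested_tb, physical_tb):
    return tested_tb / physical_tb

# 3PAR: ~27 TB tested; ~28 TB physical is assumed to match the quoted 96%.
# NetApp: ~32 TB tested; ~86.5 TB physical is assumed to match the quoted 37%.
print(f"3PAR:   {application_utilization(27, 28):.0%}")    # ~96%
print(f"NetApp: {application_utilization(32, 86.5):.0%}")  # ~37%
```

          Testing against a small slice of a much larger raw pool is exactly the short-stroking effect being alleged: lots of spindles for IOPS, little of their capacity actually exercised.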

          I'm not particularly knocking NetApp here; I genuinely think they could have gone faster. I'm just struggling to understand how this was worthwhile from their perspective.

    2. StorageArch

      Re: A couple of comments

      I looked at the posted SPECsfs2008 numbers. Only a 4K IOPS improvement on their CIFS numbers -- hardly a reason to run out and replace existing systems. NetApp will use the new systems for new customers and to replace legacy systems. Note that each time they come out with an update to the OS, it only runs on the newer hardware, so eventually the customer has to replace hardware to get new functionality.

  5. Anonymous Coward
    Anonymous Coward

    Performance Accelerator Module -- no wonder they renamed it!

    It seems that even 5 years later it's still struggling to boost the latest generation of filers beyond the performance of an obsolete array from what was, at the time, a relatively small and cash-strapped niche vendor. I'm not surprised NetApp are haemorrhaging staff: no longer just sales people sick of trying to shoehorn the mess that is cluster mode onto customers, but now also plenty of dyed-in-the-wool techies.

