Just how much bang does a FAS8040 box give you for 500,000 bucks

A just-announced two-node NetApp FAS8040 cluster scored 86,072.33 IOPS [PDF] on the SPC-1 random I/O storage benchmark, with linear scale-out extrapolation suggesting it could do better than the FAS6240's performance. The list price for the system was a smidgen below half a million bucks at $495,652.43, giving a list price/ …

COMMENTS

This topic is closed for new posts.
  1. Dave Hilling

    wow

    I could be wrong but I am pretty sure some of those systems above it are quite a bit cheaper as well.

    1. Meanbean

      Re: wow

      It's unlikely they'd be cheaper, especially as some of them are all-SSD loaded. Don't forget the NetApp price was list, which is nowhere near what you'd pay in the real world. This is an impressive spec given that it's purely SAS disks, albeit with the 512GB cache card in each controller.

  2. smt789

    Man, we just quoted a Tegile box to a client doing 60K IOPS over NFS with three years' support for under $100K (US). I know they have boxes that are in the 100-250K IOPS range and still under $200K (US).

    1. MadMike

      Tegile runs OpenSolaris + ZFS

      Tegile is running OpenSolaris and ZFS on their boxes, drawing on the billions of dollars and decades of research that Sun poured into ZFS and Solaris, so I understand why Tegile is cheap whilst delivering good performance. Good value for the money, you could say. It would be another matter if Tegile had developed everything from scratch; that would be very expensive, trying to recoup decades of research.

      Tegile has a very good dedupe engine implemented in ZFS; it is different from the current dedupe engine in stock ZFS.

      1. Levente Szileszky

        Re: Tegile runs OpenSolaris + ZFS

        Also, Tegile is getting SMB 3.0 ready, slated to be fully supported in 2H14 as I recall, instead of supporting cherry-picked features as NetApp did in v8.2: http://www.netapp.com/us/media/tr-4175.pdf

  3. Anonymous Coward

    Oooo flash-ish-ier?

    I'll keep this on the non-technical side...

    Those performance numbers are cumulative and not representative of the performance you'd get from a single volume or single controller. So in a sense it's misleading, but no different than an active/active cluster. To be honest, let's say you have a 4-node cluster but at the time the only busy volumes are on 'controller 1'; then you're looking at 1/4 of the total performance. This can happen even with data that's been laid out and optimized to be spread across multiple controllers. This would be an example of humans deciding to do crazy things with their data.
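
    To put rough numbers on that (a toy sketch; the per-node figure is made up purely for illustration, and only the 4-node, one-busy-controller split comes from the example above):

    ```python
    # Toy model: the headline benchmark figure counts every node, while a workload
    # that only touches volumes owned by one controller sees only that node's share.

    def visible_iops(per_node_iops: float, nodes_total: int, nodes_busy: int):
        cluster_headline = per_node_iops * nodes_total  # what the benchmark reports
        what_you_see_now = per_node_iops * nodes_busy   # what your busy volumes get
        return cluster_headline, what_you_see_now

    headline, single = visible_iops(per_node_iops=20_000, nodes_total=4, nodes_busy=1)
    print(f"headline cluster figure: ~{headline:,.0f} IOPS")
    print(f"volumes on one controller: ~{single:,.0f} IOPS (1/4 of the headline)")
    ```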

    A volume is tied to a single controller. There is no easy way to move a volume from a heavily loaded controller to a less loaded one without copying the data to another stack. Load shares kill the CPU, and the copies are read-only and eat up badly needed extra space.

    I know they cannot do this without a complete rewrite, but providing a way for volumes to move dynamically between controllers without copying all the data would be huge, especially when you have some nodes in the cluster consistently more loaded than others.

    More memory, CPU and flash help, but the underlying issue slowly being uncovered is that the architecture is not keeping up with the trends. NetApp has helped itself short term with cluster mode and more flash, but MUCH MORE is needed. For example, look at other clustered filesystems and Hadoop, and what they promise to offer.

    1. Anonymous Coward

      Re: Oooo flash-ish-ier?

      How is VMware handling similar issues with VMs? Is a VM spread across several hosts? No, it runs on a single host and can be migrated based on performance metrics. This is no different here. NetApp certainly doesn't have the depth of tools to do this yet, but they are coming. Maybe in the future NetApp's movements become even more granular, and that'll be great.

      We've been at the forefront of cluster deployments and have tried many, many clustered solutions over the last 10 years, some with decent results, while others were absolute disasters. What we've found over the last two years of using it is that clustered ONTAP has been the most flexible one, at least in our case. Not everything is about performance and cores and ASICs and fancy names; in fact most of it is not. It's almost always about architectural flexibility, integration with third-party apps, recovery, business continuity and the ability to do things that take downtime right out of the equation, and clustered ONTAP is very good at all of these. OK, so they may not win the performance benchmark wars, and I'm not sure that's the goal anyway, but it's certainly not a slouch by any means, and it excels in five or six other areas where most other solutions simply can't come close, today.

      Respectfully,

      Mikael

  4. Nate Amsden

    interesting comparisons

    To the five-year-old 3PAR F400.

    NetApp performance: 86k IOPS

    3PAR performance: 93k IOPS

    NetApp usable capacity: 32TB (450GB disks)

    3PAR: 27TB (147GB disks)

    NetApp unused storage ratio: 42% (this is fairly typical for NetApp systems on SPC-1 from what I've seen)

    3PAR unused storage ratio: 0.03% (numbers are available in the big full disclosure document; the full disclosure document is not yet available for the NetApp system)

    NetApp Price: $495k (this is list)

    3PAR Price: $548k (I assume this is discounted, though there is no specific reference to list or discounted pricing in the disclosure that I can readily see; obviously the pricing is five years old and the F400 is an end-of-life product, no longer available to purchase as of November 2013).

    Last I saw, NetApp clustering was not much more than what I'd consider workgroup clustering, sort of like how a VMware cluster is; that is, a volume doesn't span more than a single node (or perhaps a node pair, but in the NetApp world I think it's still a single node). I believe if you're using NFS then you could perhaps use a global namespace across cluster nodes and span that, but that's more of a hack than tightly integrated clustering.

    I admit I do not keep up to date on the latest and greatest out of NetApp, but about 18 months ago I was able to ask a lot of good questions of a NetApp architect (I think he was one at the time, at least) specifically around their clustering, and got good responses:

    http://datacenterdude.com/netapp/netapp-dataontap-81-reponse/

    Of course that is ONTAP 8.1; according to this article on El Reg the latest is 8.2, so I'd wager there can't be anything too revolutionary in a 0.1 version increment, from an architecture perspective at least.

    http://www.theregister.co.uk/2014/02/19/netapp_fas8000_midrange_box/

    I don't mean to start a flame war or anything, but I found the comparison interesting. Having dug a bit into SPC-1 results over the past few years, I find the disclosures quite informative, which is why I think it's a useful test that goes beyond the headline numbers.

    1. Anonymous Coward

      Re: interesting comparisons

      The FAS8000 was benchmarked with 2 x 512GB Flash Cache adapters, not just HDDs, so it's not accurate to compare it to previous all-HDD SPC-1 benchmarks.

    2. dikrek

      About the SPC-1 benchmark

      Hello all, Dimitris from NetApp here (recoverymonkey.org). I posted some of the stuff below on Nate's site but it's also germane here.

      FYI: The SPC-1 benchmark "IOPS" are at 60% writes and are NOT a uniform I/O size, nor are they all random. So, for whoever is comparing SPC-1 ops to generic IOPS listed by other vendors - please don't. It's not correct.

      Some background on performance: http://recoverymonkey.org/2012/07/26/an-explanation-of-iops-and-latency/

      I have plenty of SPC-1 analyses on my site. I’ll post something soon on the new ones…

      BTW: The “Application Utilization” is the far more interesting metric. RAID10 systems will always have this under 50% since, by definition, RAID10 loses half the capacity.

      The Application Utilization in the recent NetApp 8040 benchmark was 37.11%, similar to or even better than many other systems (for example, the HDS VSP had an Application Utilization of 29.25%; the 3PAR F400 result had a very good Application Utilization of 47.97%, the 3PAR V800 was 40.22%, and the 3PAR 7400 had an Application Utilization of 35.23%).
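
      To make the metric concrete (a rough sketch of my reading of it: roughly the capacity the benchmark application actually uses divided by the total physical capacity priced; the percentages are the ones quoted above, and the raw-capacity figure is made up):

      ```python
      # Application Utilization, roughly: capacity used by the benchmark application
      # (the ASU) divided by the total physical capacity purchased.

      def application_utilization(asu_capacity_gb: float, physical_capacity_gb: float) -> float:
          return asu_capacity_gb / physical_capacity_gb

      raw_gb = 100_000  # illustrative purchased capacity
      # RAID-10 can never beat 50%: half the raw capacity holds mirror copies, so even
      # dedicating every usable byte to the application tops out at 0.5.
      print(f"RAID-10 ceiling: {application_utilization(raw_gb / 2, raw_gb):.0%}")

      # Figures quoted above, for comparison:
      for system, util in [("NetApp FAS8040", 0.3711), ("HDS VSP", 0.2925),
                           ("3PAR F400", 0.4797), ("3PAR V800", 0.4022),
                           ("3PAR 7400", 0.3523)]:
          print(f"{system}: {util:.2%}")
      ```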

      The fact that we can perform so fast on a benchmark that’s 60% writes, where every other vendor needs RAID10 to play, says something about NetApp technology.

      Thx

      D

      1. Trevor_Pott Gold badge

        Re: About the SPC-1 benchmark

        Gotta +1 Dimitris on this. It's actually a pretty decent amount of IOPS for SPC-1.

        A lot of IOPS numbers are either a standard 80% or 75% read with a uniform block size, or they are 100% random write with a uniform block size. Most benchmarks also maintain a steady queue depth, which dramatically helps with getting high IOPS numbers.
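
        As a rough illustration of the difference (a sketch assuming fio is installed; the file path, size, runtime and queue depths are placeholders, and the "mixed" job is only loosely in the spirit of SPC-1, not a reproduction of it):

        ```python
        # Two fio jobs: the fixed-block, fixed-queue-depth profile behind many headline
        # IOPS figures, versus a write-heavy mix of transfer sizes that is much harder
        # on RAID and cache logic. Paths and sizes are placeholders.
        import subprocess

        COMMON = ["fio", "--ioengine=libaio", "--direct=1", "--filename=/tmp/fio.test",
                  "--size=10g", "--runtime=60", "--time_based"]

        # "Hero number" style: 100% random read, uniform 4 KiB blocks, steady queue depth.
        hero = COMMON + ["--name=hero", "--rw=randread", "--bs=4k", "--iodepth=32"]

        # Write-heavy mix: 60% writes, 4 KiB to 64 KiB transfers, lower queue depth.
        mixed = COMMON + ["--name=mixed", "--rw=randrw", "--rwmixread=40",
                          "--bsrange=4k-64k", "--iodepth=8"]

        for job in (hero, mixed):
            subprocess.run(job, check=True)
        ```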

        While I'd say the price for this NetApp box is moderately high for the performance delivered, it's still hella-impressive performance. And frankly, for that performance, it's reasonably priced if your only consideration is traditional vendors.

    3. This post has been deleted by its author

      1. JohnMartin

        Re: Waffle

        -Disclosure NetApp Employee-

        It's effectively impossible to short-stroke on WAFL: the entire disk platter gets used over time, and the extensive warm-up phase pretty much ensures that has already happened.

        The trouble with disks is that they keep getting exponentially bigger without getting much, if any, faster; if anything, they're likely to get slower over time. 10K drives are in the process of replacing 15K, as 15K SFF drives don't make much sense from a price-performance point of view.

        At some point you run out of performance well before you run out of capacity, and the fact that we use RAID-DP instead of RAID-10 means we hit the IOPS/GB limit with a lot more room to spare than competing architectures that use RAID-10.

        While it is true that WAFL can turn unused capacity into write-performance efficiency, you don't really need much more than 10% additional space on top of the standard 10% WAFL reserve for it to work at very high efficiency (law of diminishing returns after that point). Net result: better-than-RAID-10 write efficiency for an overall sacrifice of a little over 30% of purchased capacity, including RAID, the WAFL reserve and a little extra to make the write allocator's job easier (versus a minimum of at least 50% for RAID-10). Does the free space help us get a good result? Yes, but it's a by-product of the $/GB density of disks (even accelerated by flash-enhanced auto-tiering).
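
        To put back-of-envelope numbers on that trade-off (the 16-drive, dual-parity RAID group size is an assumption for illustration, so the exact percentage shifts with group size; the ~10% WAFL reserve and ~10% extra free space are the figures above):

        ```python
        # Usable fraction of purchased capacity after parity and reserves, versus RAID-10.

        def usable_fraction_raid_dp(group_size=16, parity_drives=2,
                                    wafl_reserve=0.10, extra_free=0.10):
            after_parity = (group_size - parity_drives) / group_size
            return after_parity * (1 - wafl_reserve) * (1 - extra_free)

        dp = usable_fraction_raid_dp()  # ~0.71 usable, i.e. roughly 30% given up
        raid10 = 0.5                    # mirroring alone gives up half, before any reserves

        print(f"RAID-DP + reserves: ~{dp:.0%} usable ({1 - dp:.0%} of purchased capacity given up)")
        print(f"RAID-10 ceiling:    {raid10:.0%} usable at best")
        ```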

        I'd be interested if anyone knows of an all-disk or hybrid flash+disk result that gets better $/IOPS/tested capacity; I did a quick check and couldn't see one.

        1. Anonymous Coward

          Re: Waffle

          Specifically, NetApp suggests not using over 60% of the disks' capacity, for performance reasons, as beyond that there are diminishing returns. The disks will get slower (it appears) if NetApp decides to use vendors that push out Shingled Magnetic Recording (SMR).

          As a user it is important for me to know how much performance I can get from a single volume, not just the entire cluster, just as it is important for me to know the single-threaded performance I can get from a CPU. If someone knows the breakdown of what operations will take place in a particular volume, then from that the IOPS can be determined.

          I'd be interested in knowing when NetApp is going to add an all-flash array using ONTAP and some variation of WAFL for flash. Don't sell me on the E-Series stuff, though. As was stated, disks keep getting bigger and are not getting faster (due to the laws of physics). The second issue is that as we move to larger and larger generations of disks, RAID-DP loses its effectiveness. I think there are around another 5-6 years of RAID-DP before additional data protections 'absolutely' need to be implemented, whatever that may be.

  5. Anonymous Coward

    I'm asking myself how a current low-cost solution would perform, playing with different RAID levels:

    - 1x Synology RackStation RS10613xs+

    - 8x Synology RX1213sas

    - 104x Western Digital Se 4TB, 7200rpm, 64MB cache (WD4000F9YZ)

    - price: 50,000 incl. VAT

    1. Trevor_Pott Gold badge

      With SPC-1 benchmarks? Not even close. Synology systems do fantastically at a fixed block size with a queue depth of 4. They do even better in Hybrid mode with a "normal" workload profile (such as 75% read).

      But SPC-1 is a different animal. It isn't so much a factor of the disk speed as it is the RAID and caching algorithms. It throws all sorts of different blocks at high write levels with variable queue depths. I love Synology, but you'd be looking at all-flash setups on their latest gear to get SPC-1 benches that high off them.

      It would be a quarter the cost of the NetApp, but it wouldn't scale worth a damn. That's the real deal here: NetApp is claiming linear scaling at this performance level for their cluster. Synology and other low-end solutions (with the possible exception of a handful of Server SAN solutions like Maxta, which are highly unlikely to give you the low latency and high IOPS on an SPC-1 test) don't scale. They are typically 2-node affairs doing block-level replication.

      If, however, the scalability doesn't matter, well... a pair of fully expanded RS3614xs+ units filled with Samsung EVO SSDs could likely take this NetApp out back and spank it, and for cheaper too. But again, bear in mind that limitation. NetApp can always add another node to the cluster. Synology cannot.

      Also: Synology needs Flash to get there. That's less of a problem today, when flash is so cheap, but it does go to show how much effort had to go into those old systems.

  6. Man Mountain

    Really!

    The F400 is a five-year-old, now obsolete array using no SSD or flash, and it still beats this latest, greatest array from NetApp. Really? If you were NetApp, you wouldn't even publish this! And I'm surprised how poor the V7000 is without SSD.

  7. Anonymous Coward

    Also, if I look at the NetApp configuration, I only see one licence: FCP. I can't SnapRestore or FlexClone, or use any SnapManager.

    1. Anonymous Coward

      Focus

      This is the cost of the tested config for this benchmark. They also have FC switches in there, which normally you wouldn't see, since the majority of customers already have existing fabrics. Do other vendors provide all of their SW for SPC-1 testing? They don't.

  8. Anonymous Coward

    Rule of thumb: if you are not being attacked, then you're not making an impact.

    So I'm glad to see the trolls are still around.

    1. Anonymous Coward

      Rule of thumb

      Rule of thumb: if you can't even beat a discontinued, 5+ year old, all-disk platform with the assistance of flash, then you have no business running benchmarks.

      NetApp used to be able to ride this storm with fairly unique killer features, but in reality they just haven't innovated for the last four years. Regardless of cluster mode, their basic building block of HA filers in a failover cluster is just no longer up to scratch. But due to their level of investment they have no intention of changing that; indeed, they're actually exacerbating the poor design through cluster mode.

      1. berserko1

        Re: Rule of thumb

        What you fail to realize is that the F400 provided this performance using about 220 spindles, and NetApp is providing nearly identical performance on ~80. I'd much rather rack and power 80 spindles than 220... Perhaps power is free where you are.

  9. Anonymous Coward

    Re: Rule of thumb

    At least go to the trouble of reading the disclosure reports before commenting; that way you can get the correct disk quantities before enlightening us with your green credentials.

    What you fail to realize, or choose to ignore, is that that result was run back in 2009, when flash cache solutions weren't available to these platforms. Possibly someone will chime in and inform us NetApp had PAM back in the day, but show me an SPC-1 test using PAM from back then so we can compare.

    Despite NetApp having the leveller of a large flash cache, short-stroking capacity and the latest generation of everything, including new multicore CPUs, more cache, the latest version of ONTAP, etc., they still failed to surpass a five-year-old platform. NetApp sadly is no longer the force it was, and it's the underlying architecture, not the software, letting it down.

    I fully accept power and cooling aren't free and will only increase in price, but seeing how badly the latest generation of NetApp compares against that five-year-old benchmark on an obsolete platform, I think I'd rather take my chances with one of the newer 3PAR 7000s with a bit of SSD.

    The 7400, using older, non-SSD-optimised code, did 258,000 IOPS. Now, that was an all-SSD config, so not really that practical for the masses, but even if I could get half of that (a baby 7200, maybe) I'd still be caning the latest and greatest from NetApp.

    1. Meanbean

      Re: Rule of thumb

      Wouldn't it be great if all vendors were forced to use the same read/write ratio and the same block sizes in these tests? This information is available for all to see, but not everyone understands why it has such an impact on the outputs. Performance also doesn't tell the entire story; I would bet that any CIO worth his salt would also be interested in understanding how a storage platform can almost become a transparent part of their architecture. In my experience most vendors can provide similar performance at a similar price once they get into a competitive situation; the real value comes from the software.

  10. Anonymous Coward

    But that's the whole point of SPC-1: it uses a standard, repeatable workload with a mix of I/O sizes, read/write ratios, and random and sequential I/O, as well as a price for the kit in the submission. There's also the exec summary and full disclosure docs. All of the above is to give vendors testing kit a level playing field, and you're right, storage isn't all about performance. There are many more angles to selecting storage, just see the capacity discussions above, but in this case the article and the SPC-1 result are all about performance. I suppose if you don't have the performance then you have to sway the discussion to features, but I'm not sure NetApp can win that conversation these days. And if that's the case, why even run the test knowing you can't compete? Just do an EMC and pretend it doesn't exist (despite being an SPC member for years).

    1. Meanbean

      Some valid points. Having read through the F400 benchmark (yes, five years old), it uses twice as many nodes and twice as many disks; given the cited linear scaling of clustered ONTAP, they could go much higher. As mentioned earlier, I'd much rather be paying for the power, cooling and floor space on the NetApp. In the newer benchmarks the 3PAR 10000 does also beat the NetApp 6240, but with two more controllers and four times as many disks; data centres are not cheap to run.

      There is plenty of choice in the marketplace, which is great for customers, but clients don't buy systems based on benchmarks; they buy based on their workload. The truth is that whilst the start-ups are without a doubt changing the industry for the better, the big boys such as EMC and NetApp continue to execute and evolve. The winner in such a competitive environment will undoubtedly be the client.

      1. This post has been deleted by its author

      2. Anonymous Coward

        Yes, the 3PAR uses twice as many nodes and more disks, but each of those nodes is equipped with 7+ year old CPUs, a PCI-X bus, 4Gb Fibre Channel and minimal cache in comparison to the tested NetApp config. Also worth noting is that the 3PAR 10800 beats the NetApp 3TB-flash-cache-equipped six-node 6240 cluster by a wide margin (200,000 IOPS). So we're not talking marginal gains here, and again NetApp's unused storage percentage was 43%, only two points off being rejected by SPC-1.

        The cluster and linear-scaling story on NetApp is a little disingenuous, I think. Each node owns its own volumes; they're not really shared or load-balanced across nodes within the cluster. Each node's partner simply acts as a failover target for the other. In reality you have an active/passive configuration based on multiple two-node FAS failover clusters, with some unified management. For all intents and purposes these are individual systems under a common management layer.

  11. Cloud 9

    WAFL unable to scale?

    Lots of noise comparing the new top tier to the 5 year old 3Par system.

    What concerns me more here is that you can get 70% of the comparative IOPS on a FAS3170: an ancient midrange controller versus a top-tier modern controller. I would have thought that if you stepped up to the next tier (two, if you count the 6000 series) and added six years of hardware development, you'd get a more significant improvement.

    If the license costs are still tied to the tiering, this makes it even less appealing.

  12. Anonymous Coward

    Old wine in new bottles

    Call us when you deliver some innovation.
