NetApp: Steenkin' benchmarks – we're quicker than 3PAR

Playing the latency card, NetApp says it has a faster array than 3PAR – despite having a lower SPC-1 benchmark score. A 6-node FAS6240 cluster SPC-1 benchmark (PDF) result has been posted. SPC-1 is a storage array benchmark measuring the number of I/Os per second (IOPS) to data accessed over Fibre Channel as blocks rather than …

COMMENTS

This topic is closed for new posts.
  1. Lusty

    NetApp

    I don't know why NetApp have made a fuss of this; nobody in their right mind buys a NetApp for performance. As the article states, they have perfectly acceptable performance but nothing special. NetApp should be focusing on all the other stuff they have that everyone else doesn't – backup being the main one, with their integrated SAN backup being streets ahead of the competition. NetApp are also very cheap if you use all the features, but they don't mention this enough, so everyone thinks they are very expensive – because as a standalone array, they are.

  2. J.T

    NetApp: 70,481.804 GB of Unused Storage

    3Par: 33,592.791 GB of Unused Storage

    IBM: 38,979.496 GB of Unused Storage

    Just keep that in mind. Great copy/paste of D from NetApp's blog though, Chris. One thing about the 50% markdown from list, though: this is a company with shrinking market share. Maybe part of that is pricing.

    Nice throughput, and grats on the 6-way cluster. You guys should keep pushing the IOPS in that testing and see where the actual performance fall-off ends up.

    1. Hard_Facts

      What is NetApp trying to prove to prospective buyers of storage?

      RamSan and the like have shown how these results get boosted. The same goes for vendors publishing benchmarks on SSD-only storage.

      Getting these results, and the lopsided comparison vis-à-vis other benchmarks, comes at what expense?

      1. "Single NetApps FAS6240 Storage" Benchmark: Total 193TB storage on the Netapps (70TB unused) & 1TB Flash to assist the 123TB actual storage used in this configuration for the benchmark.

      2. "Single 3PAR V800 Storage" Benchmark: Total 573TB storage on 3PAR (83TB Unused) & 0.768TB of Cache to assist 490TB actual storage used in this configuration for the benchmark.

      3. "A storage Cluster of 16 IBM V7000" Benchmark: Total 281TB storage across 16 Node V7000 (81TB unused) & 0.448TB of cache (192GB on 8 SVC + 256GB across 16 V7000) to assist 200TB actual storage used in this configuration for the benchmark.

      All three deliver 3-7ms response times, but:

      One uses 57% of all raw storage (123TB usable) with 1TB of Flash to deliver an under-3ms response time @ 90% load

      One uses 84% of all raw storage (490TB) & delivers an under-7ms response time @ 90% load

      One uses 70% of all raw storage (200TB) & delivers an under-6ms response time @ 90% load

      Pretty clear what the "PAINS for the supposed GAINS" are for such performance:

      1. Use all of the storage you buy & get optimal performance?

      2. Do not use "half to a third" of the storage you buy?

      3. Pay for flash-assisted performance by putting in more GB of Flash & cache per TB of storage you intend to use?

      As a buyer, I would rather compare cost per TB of total usable storage at an acceptable performance level of, say, a sub-10ms response time:

      Array             IOPS      Physical (P)   Usable (ASU+DP) (U)   Price ($)    $/TB (U)   $/TB (P)
      --------------------------------------------------------------------------------------------------
      NetApp FAS6240    250,039   193TB          98TB                  1,672,602    17,067     8,666
      IBM V7000         520,044   282TB          237TB                 3,598,956    15,185     12,762
      3PAR V800         450,212   573TB          528TB                 2,965,892    5,617      5,176
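
      If you want to sanity-check those per-TB figures, a quick Python sketch does it (prices and capacities as quoted above; nothing here beyond simple division):

        # Cost per TB of physical and usable capacity, from the SPC-1
        # figures quoted above: (price $, physical TB, usable TB).
        results = {
            "NetApp FAS6240": (1_672_602, 193, 98),
            "IBM V7000":      (3_598_956, 282, 237),
            "3PAR V800":      (2_965_892, 573, 528),
        }

        for name, (price, physical_tb, usable_tb) in results.items():
            print(f"{name:15s}  $/TB usable: {price / usable_tb:7,.0f}"
                  f"   $/TB physical: {price / physical_tb:7,.0f}")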

      If I need a sub-3ms response time for a specific application with a 5TB usable capacity need, I may consider a RamSan or a smaller storage array with lots of SSD & Flash/cache.

  3. Anonymous Coward
    Anonymous Coward

    IBM's doing it again

    Looks like everyone is starting to play 'who's got the biggest benchmark' by throwing HW at the problem... It happened with the TPC benchmarks in the 2000s. Nobody is buying this stuff except for maybe 10 customers.

    Vendors – how about something more realistic that most of the market can use – say, what can I get for $50K, $100K, $200K or $500K NET (HW + storage + SW + misc)? $/IOPS for unrealistic supermodel configurations isn't useful to the market...
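
    To put rough numbers on that, here's a toy Python sketch using the prices and IOPS from the results discussed in this thread. It naively assumes cost scales linearly down to small budgets, which it obviously doesn't, so treat it as illustration only:

      # Naive $/IOPS and "IOPS per budget" from the SPC-1 results in this
      # thread. Assumes linear cost scaling, which real arrays don't have.
      configs = {
          "NetApp FAS6240": (1_672_602, 250_039),  # (price $, SPC-1 IOPS)
          "IBM V7000":      (3_598_956, 520_044),
          "3PAR V800":      (2_965_892, 450_212),
      }
      budgets = [50_000, 100_000, 200_000, 500_000]

      for name, (price, iops) in configs.items():
          usd_per_iops = price / iops
          print(f"{name}: ${usd_per_iops:.2f}/IOPS")
          for budget in budgets:
              print(f"  ${budget:,} ~ {budget / usd_per_iops:,.0f} IOPS (linear assumption)")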

  4. Anonymous Coward
    Anonymous Coward

    Flash vendors

    I expect new flash vendors such as Whiptail, Pure, Nimble, SolidFire etc. to be able to put up some remarkable benchmark and $/IOPS numbers compared to the ye olde disk boys.

    New guys... why not throw a completely ridiculous configuration into the mix and send everyone back to the drawing board on how to put more effort into a customer-friendly benchmark... TMS has done that, but they are still not top dog (although it seems like their $/IOPS is still unbeaten).

  5. Anonymous Coward
    Anonymous Coward

    It's about performance, not about storage...

    Building an array to deliver a certain amount of performance to support applications is the usual build case for these kinds of systems. Trading storage overhead (wasted/unused storage) for lower latency at a given workload seems like the cheapest way to address the issue.

    Or, as I heard one person put it: "this is the price you pay for performance, the storage that comes with it is free" ;)

    No one writes stories about brave IT directors and their proud benchmarks. Real solutions for real problems are what matter.

  6. Anonymous Coward
    Anonymous Coward

    How?

    The 3PAR has the same latency as the NetApp at 250,000 IOPS. The difference is, the 3PAR keeps going. Not something I would shout about.

  7. Adam 61
    FAIL

    Bad choice of benchmark

    Clearly this is meant for people with a low attention-to-detail threshold. As per the above, anyone with a degree of intelligence can easily see through the arguments presented here. What is NetApp's latency at 450,000 IOPS? Oh, they don't go that high......

    Must try harder

    1. dikrek
      Stop

      Explanation of 100% load

      Hello all, Dimitris from NetApp here (the person who wrote the article Chris referenced).

      It's here: http://bit.ly/Mp4uu0

      Anyway, to clarify since there seems to be some confusion:

      The 100% load point is a bit of a misnomer, since it doesn't really mean the arrays tested were maxed out. Indeed, most of the arrays mentioned could sustain a bigger workload given higher latencies. 3Par just decided to show what the IOPS at that much higher latency level would be.

      The SPC-1 load generators are told to run at a specific target IOPS, and that is chosen to be the load level, the goal being to balance cost, IOPS and latency.

      So yes, it absolutely makes sense to look at the gear necessary to achieve the requisite IOPS vs latency, and the cost to do so.
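
      If it helps, here's a toy Python sketch of that idea – the sample points are made up for illustration and aren't from any actual disclosure – picking the highest target load that still meets a latency goal:

        # Given (target IOPS, measured avg latency ms) pairs from a run,
        # pick the highest load level that still meets a latency goal.
        # Sample points below are invented for illustration only.
        measured = [
            (50_000, 1.2), (100_000, 1.8), (150_000, 2.3),
            (200_000, 2.8), (250_000, 3.4), (300_000, 5.9), (350_000, 12.0),
        ]

        def max_load_under(latency_goal_ms, points):
            """Highest target IOPS whose measured latency stays under the goal."""
            ok = [iops for iops, lat in points if lat < latency_goal_ms]
            return max(ok) if ok else None

        print(max_load_under(3.0, measured))   # -> 200000
        print(max_load_under(10.0, measured))  # -> 300000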

      Databases like their low latency.

      And yes, all-SSD arrays will of course provide lower overall latency – though usually one can't afford a reliable, enterprise all-SSD array for a significant amount of storage. You think the prices listed are high? Think again.

      What NetApp does with the combination of megacaching and WAFL write acceleration is provide a lot of the benefit of an all-flash architecture at a far more palatable cost (especially given all the other features NetApp has).

      D

  8. radurb
    Thumb Up

    The inevitable march to the top

    If you observe NetApp's progression over the years, the conclusion is inevitable. They started out as a workgroup NAS vendor, then continued adding more and more enterprise-grade functionality, reliability and finally now, enterprise-class performance. Make no mistake, this is a Tier1 result from what is now a Tier1 SAN *and* NAS vendor.

    See JJM's latest article for further affirmation:

    http://www.storagenewsletter.com/news/marketreport/western-european-external-disk-idc-1q12

    For me, this benchmark result is nothing more than irrefutable proof that after many years of development, C-Mode represents a new tranche of scalability for Data ONTAP. I can see why those being disrupted by NetApp are upset and try to discredit the results via scattered nit-picking, but from an objective perspective, at the moment no one else can match NetApp's combination of storage functionality, efficiency and performance at scale.

    Which is what makes it my favorite array to work with.

    1. Anonymous Coward
      Anonymous Coward

      Re: The inevitable march to the top

      Grammatically better than average -- check...

      Exceeds 50 words -- check...

      Has a follow-up link that makes sense -- check...

      Correct spelling of netapps -- check...

      Put a '-' between C and mode -- check...

      Emphasize AND with * -- check...

      Multiple posts on netapps -- check...

      Are you an IT guy or a vendor troll, Radur? Please disclose.
