EMC blows benchmark away - again

EMC has blown another file-serving benchmark away with a result more than four times faster than the previous best. A pretty much all-flash VG8 (VNX gateway)/VNX 5700 array scored 661,951 operations per second on the SPECsfs2008 CIFS benchmark. The overall response time was 2.1msecs. SPECsfs2008 CIFS benchmark chart …

COMMENTS

This topic is closed for new posts.
  1. Anonymous Coward
    Thumb Down

    Realistic Test?

    "The best non-EMC result was achieved by a NetApp FAS3210 with 64,292 ops/sec and a 1.50 response time."

    Yes, but the NetApp benchmark only used 144 SATA drives and a PAMII module, which is a considerably more realistic test than EMC's, which used over 500 flash drives. What real value are EMC demonstrating by doing these tests? If they want to prove they are better than NetApp, why not do a like-for-like test?

  2. Freakyfeet
    Stop

    How can you compare the two?

    "Once more an almost all-flash system has shown the door to disk-based competitors on a SPEC benchmark"

    In other news, a Ferrari 360 beat a Ford Mondeo in a 1 mile race.

    Funny how the cost isn't mentioned?

    1. Storage_Person

      Tell Us the Money

      These benchmarks need to be overhauled so that they show $/IOPS (or equivalent) and $/GB (end-user available, not raw) at a number of capacity points such as 1TB, 10TB and 100TB. And all costs should be over 5 years so that they include whatever insane year 4 and 5 maintenance prices these vendors think they can get away with. Only then will they be useful for end-users to compare one product against another in something approaching their own environments.
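      As a back-of-the-envelope illustration of the metric being proposed, here is a minimal sketch of a 5-year $/IOPS and $/GB comparison. All prices, maintenance rates and performance figures below are made up for illustration - they are not vendor pricing.

```python
# Hedged sketch of the 5-year cost metrics proposed above.
# Every number here is a hypothetical placeholder, not real pricing.

def five_year_cost(purchase_price, annual_maintenance_rate):
    """Total 5-year cost: purchase price plus maintenance, assuming
    years 1-3 at the base rate and years 4-5 at double the rate
    (the late-life maintenance uplift the comment complains about)."""
    base = purchase_price * annual_maintenance_rate
    return purchase_price + 3 * base + 2 * (2 * base)

def metrics(purchase_price, maint_rate, iops, usable_tb):
    """$/IOPS and $/GB (end-user usable, not raw) over 5 years."""
    tco = five_year_cost(purchase_price, maint_rate)
    return {
        "tco": tco,
        "dollars_per_iops": tco / iops,
        "dollars_per_usable_gb": tco / (usable_tb * 1000),
    }

# Hypothetical all-flash vs disk-based systems at a 10TB capacity point
flash = metrics(purchase_price=2_000_000, maint_rate=0.10,
                iops=600_000, usable_tb=10)
disk = metrics(purchase_price=300_000, maint_rate=0.10,
               iops=60_000, usable_tb=10)

print(f"flash: ${flash['dollars_per_iops']:.2f}/IOPS, "
      f"${flash['dollars_per_usable_gb']:.2f}/usable GB")
print(f"disk:  ${disk['dollars_per_iops']:.2f}/IOPS, "
      f"${disk['dollars_per_usable_gb']:.2f}/usable GB")
```

      Run across several capacity points (1TB, 10TB, 100TB) this would give end-users the curve the comment asks for, rather than a single headline ops/sec number.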

  3. Nick 6
    Unhappy

    Silly configuration

    Silly: not a like-for-like comparison, and no real-world equivalent.

  4. MarkA
    FAIL

    I call BS

    Really? You're not going to call out the price of the various solutions?

    How is that an actual benchmark? To be a benchmark it needs to be something that others strive for. Only an idiot with more money than sense would put their file services on flash.

    I pity the fool, etc.

  5. Dwayne
    Go

    the sound of music

    ...I'm a geek and I am interested to see how specific vendor solutions can be scaled for performance... Spec benchmarks are a great way to showcase this.

    1. Anonymous Coward
      FAIL

      Not if...

      ... they benchmark against another system which hasn't also been scaled for performance. This is no different to fitting a VW Golf with a Ferrari engine and then comparing it to a Ford Fiesta... remarkably enough, the one with the Ferrari engine won.

      It's all bollocks anyway; as we all know, there isn't and won't be enough flash-based storage available to replace spinny disks for many years to come.

      Fail, because... well, it's yet another piece of marketing crap that bears no relevance in the real world, where people have to actually pay money for it.

      1. Dwayne
        Go

        a bridge too far

        http://www.spec.org/sfs2008/

        Well - SPECsfs2008 is not a $/IOPS benchmark - it's a performance benchmark, and it's pretty telling how a specific vendor solution can and will scale. If you (or I) developed an industry-standard benchmark for $/IOPS against specific capacities, the benchmark would be no more real-world than SPECsfs2008... Just enjoy the fun and take it for what it is.

    2. Tom Maddox Silver badge
      Grenade

      I spotted the EMC employee!

      What do I win?

  6. thegreatsatan
    FAIL

    misleading

    It's 4 VNX systems with 8 file systems (none of which can share data between them) - hardly the kind of rig any business would deploy.

    And for 6 million you get 60TB usable. Hell, for that price point I could deploy all DRAM and get twice the IOPS, faster writes and far greater reliability. Once again, marketing a system no one will ever buy as a valid benchmark.
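    Taking the commenter's own figures at face value (the ~$6m and 60TB numbers are the commenter's estimates, not quoted list prices), the implied cost per usable gigabyte works out as:

```python
# Rough arithmetic on the commenter's own estimates: ~$6m for ~60TB usable.
price_dollars = 6_000_000  # commenter's estimate, not a quoted price
usable_tb = 60

dollars_per_gb = price_dollars / (usable_tb * 1000)
print(f"~${dollars_per_gb:.0f} per usable GB")  # prints: ~$100 per usable GB
```

    At roughly $100 per usable GB, the comparison with a DRAM-based alternative is not as far-fetched as it first sounds.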

    SPEC continues to be a joke as well.

    1. Dwayne
      Go

      Gone with the wind

      Nope - EMC, like NetApp, provides NAS via an integrated or gateway configuration. The SPECsfs2008 result above is based on a gateway solution which scales from 1 to 8 data movers. The backend (VNX) storage array(s) can be independent.

  7. botts61
    WTF?

    Post what clients want to see

    I love reading the posts on this page to keep up to date on storage news.

    When I see posts like this showing one-sided info based on configs that clients will never buy, it frustrates me.

    I agree, show that EMC has a new fast benchmark. But put it into perspective: it is a config that is worth 5-6 million and that no one in their right mind would buy.

  8. Anonymous Coward
    FAIL

    @Dwayne

    The trouble is, it may well all be a bit of fun - vendors showing what their storage arrays are capable of, etc. However, when EMC starts making claims that its storage outperforms NetApp and pointing to the SPECsfs benchmarks, it sets unrealistic expectations with some customers who don't know any better.

    1. Dwayne
      WTF?

      The fast and the furious

      LOL!! That's what NetApp have been doing for years!

      http://blogs.netapp.com/dave/2008/02/controversy-net.html

  9. Mick Russom

    Days are numbered

    EMC's days of charging too much for storage are nearing an end.

    Network wide global filesystems implemented in a redundant distributed way on Direct Attached SSDs will make these relics a thing of the past.

    EMC's gear is irritatingly unreliable for the price and shamefully expensive with a byzantine software implementation.

  10. thecakeis(not)alie

    @Chris Mellor

    So, um... where's the contest to win one of these beauties? Preferably open to people outside of just the UK/US/OZ?

    Completely unrelated: do you know how I can rig a contest for a bitchin' storage system?

  11. Lam Kuet Loong
    Thumb Down

    another with SSD?

    VNX... still Celerra DART code at the backend. Yes, it scales from 1 to 8 data movers with an independent back-end controller, aka CLARiiON (dual-controller). But NONE of the DATA MOVERS share the SAME FILESYSTEM. 16TB limit? It is still a failover data mover concept, with the Control Station initiating the failover? 3 active and 1 standby? A standby that sits there idle, waiting for one of the data movers to fail???

    I wonder, if one used ZFS with a similar number of SSDs, whether it would deliver the same or better benchmark.

    Serious marketing hype. But then again... it has been a long time since EMC delivered such hype.

    Yes, it does show the scalability of the system. But then again, pure NFS/CIFS without other real-world activities does not show the real system performance.

    For whoever really uses that many SSDs... a highly tuned Unix/Linux box should deliver similar results.

  12. Anonymous Coward
    Grenade

    SSD Stockpile

    When you have such a large stockpile of SSDs from STEC you just gotta do something with them.

  13. WhoTheHeckCares?
    Grenade

    For some interesting spinny disk contrast, check out the SPECsfs2008 NFS results

    EMC only participates in benchmarks they can find a way to do well in. They are completely absent from the SPC-1 benchmark, where they would show very badly vs. 3PAR and HDS, among others. Amusingly enough, NetApp posted results for the EMC Clariion a few years back to prove this point (weeping and gnashing of teeth from EMC could not prevent the benchmark; check out the SPC-1 results from NetApp to see the two results for EMC).

    That said, I note with interest that on the NFS side of the sfs2008 benchmark, the "flash-only" EMC absurdity produces not quite 500,000 whatevers. IBM's Scale-Out NAS (SONAS), using spinny disks, produced just over 400,000 of the same whatevers (albeit with much higher latency towards the upper end of the performance curve). And HP produced a very respectable 333,000 whatevers out of a collection of HP-UX blades and a bunch of spinny disks. Where is HP's X9000 scale-out NAS box, and why post results for a non-commercial NAS config when HP has a whole NAS line under the StorageWorks division? We can only speculate.

    Moving down the list, BlueArc showed fairly pathetic numbers, around 150,000 NFS whatevers, for its top-end box. NetApp did only a little better, at just under 200,000 whatevers at its high end, along with much lower numbers for its mid-tier offerings. And Isilon (seemingly a direct competitor to SONAS, and the leader in the scale-out space) only managed about 140,000 NFS whatevers. All very interesting, at least to me.

    As to whether or not ZFS would show as good or better results if outfitted with a huge collection of SSDs: I don't see Sun/Oracle represented at all, which seems odd since Sun had the ZFS-based J9000 line before Oracle bought them, and even has some SPC-1 benchmark results posted (with pathetic IOPS figures, but at such a low price point that the $/IOPS were not too bad). They really should show up somewhere on this sfs2008 report, either NFS or CIFS or both... but, alas, N/A.

    So, I guess I would like to see the graph that accompanied this story show both CIFS and NFS results, along with the various credible NAS vendors and all of their results, from either benchmark. I know the two benchmarks don't compare directly (CIFS "whatevers" don't equal NFS "whatevers"), but the NFS side does seem to be much more competitive, with at least one legitimate, commercially available spinny disk setup that almost matches EMC's best efforts at juking the benchmark.

    The Register did a decent job of sorting the NFS side out here: http://www.theregister.co.uk/2011/02/23/enc_vnx_secsfs2008_benchmark/

  14. Marc 1

    Cars.

    Come on guys... you can't criticize EMC because other companies submitted something more 'real world'. Who brings a practical family car to a horsepower war and brags about affordability or practicality? You know there are those out there who buy the seemingly unaffordable, high-spec, no-real-world-purpose cars.

  15. Freakyfeet

    @Marc 1

    Absolutely agree - which is why I'm not criticising EMC for playing a blatant marketing game

    What is disappointing is the blatantly pro-EMC way in which this article has been written. Chuck must have a new bottle of single malt ready at the bar in Cork! ;-)

    1. Adam White

      pro-EMC?

      There's nothing pro-EMC, blatantly or otherwise, that I can see in this article. It's six short paragraphs that basically repeat what it says in the graph.

      "EMC has blown another file-serving benchmark away with a result more than four times faster than the previous best."

      True

      "A pretty much all-flash VG8 (VNX gateway)/VNX 5700 array scored 661,951 operations per second on the SPECsfs2008 CIFS benchmark. The overall response time was 2.1msecs."

      True

      "The tested system had 560 x 200GB flash drives and 21 x 300GB, 15,000rpm SAS disk drives for a total of 101.2TB and eight file systems. The total exported capacity was 77.455TB. The VNX5700 had five X-blades, one of which was a stand-by blade."

      True

      etc etc

      There's no analysis that says "the whole thing is too expensive to imagine and has no real world benefit" but surely we're clever enough to decide for ourselves what the implications of this new benchmark may or may not be.

      1. Colin_L

        the purpose is marketing

        EMC wants to create/reinforce the idea that EMC is 'fast', and doesn't care that customers will purchase spinning-disk configurations that probably cannot deliver a tiny fraction of this level of performance.

        I repeat: they don't care.

        It's the same as base-model and 'sport'-model cars. One of them is there to deliver high performance, in many cases well beyond what the owner/driver can extract and certainly beyond legal use of public roads, and the base model is there to capitalize on the image of the car at a lower price point.

        EMC wants to sell VNXs, and ideally it would have customers who naively believe they are 'better' and 'faster' than the competition. Looking at the SP utilization on an NS-480 or -960, I don't think the VNX's massively more powerful processors were about crushing benchmarks with SSD. I think those powerful processors are completely necessary to be able to use thin provisioning, automatic storage tiering and, someday, block-level dedupe on primarily spinning disk.

