One day later: EMC declares war on all-flash array, server flash card rivals

EMC is announcing its XtremIO all-flash array and, just 24 hours after Violin Memory's PCIe card launch, a line of XtremSF server flash cards with XtremSW Cache software that effectively expands and renames the VFCache server flash card product line. It appears that the planned Project Thunder server networked flash cache …

COMMENTS

  1. Nate Amsden

    any real-time compression?

    For their XtremIO product: dedupe is nice for VDI, though much less useful in other workloads (at least the workloads I work with). Real-time compression would provide a better general-purpose data reduction system.

    Most of the new startups have both dedupe and compression.

    I don't see any mention of compression in the article; not sure if it's not included or just a minor oversight.

    1. This post has been deleted by its author

  2. Anonymous Coward

    Bricks

    Why do they call their storage Bricks? It seems like there would be a better descriptive name.

    Or perhaps, that is the descriptive name... Brick.

    8 Bricks = 1 million IOPS. Obviously the customer-minded EMC will focus on selling on value.

    Other vendors are already selling array units that can achieve 1 million IOPS in only 3RU, after all.

  3. Man Mountain

    You say that HP have an all-flash array in development - it's already available! You can have a 3PAR array with all-SSD drives if you want. But the nice thing is, it isn't just an all-flash array ... the same mature, stable, highly functional platform can be a hybrid array or a traditional array. HP aren't having to acquire a small flash player or start from scratch; the same platform hits Tier 1, Tier 2 and all-flash requirements. HP will not be introducing a new product as the existing products already do all this.

    1. Anonymous Coward

      "HP will not be introducing a new product as the existing products already do all this."

      No, they don't. Putting flash drives in a disk array is not the same thing as a PCIe MLC-based array, which others, such as IBM's TMS RamSan, have been developing for years as a full-function storage system with RAID, QoS, mirroring, etc, plus software to send preferred reads and other flash-friendly I/O to flash and the rest to disk in a hybrid approach if that is what you want. EMC, and everyone else, has been throwing flash drives into disk arrays for years, which is what 3PAR does. There is a difference between a flash array and a disk array with flash drives, latency being the largest difference.

      1. Man Mountain

        See, I thought this article was mostly about XtremIO. Each brick apparently contains 16 x 200GB SSDs, but now you're telling me that this isn't a flash array. Sounds pretty flashy to me, as does a StoreServ 7400 with 200 x 200GB SSDs.

        And there is a difference between throwing a few SSDs in a disk array and having an architecture that lends itself to supporting very large numbers of SSDs! Throwing a few SSDs in an array will not deliver hundreds of thousands of IOPS at sub-millisecond latency ... StoreServ will. Just because HP don't need to bring a new flash-specific array to market doesn't mean they don't have one already!

        1. Anonymous Coward

          They mentioned the XtremSF cards and Violin's PCIe-based arrays too. If we are just talking about SSDs in a SAN array with an 8Gb network bottleneck, then everyone has had a flash array for years. I agree that XtremIO is in the mold of the traditional SAN array, but the point is that HP doesn't have anything today which can compete with the flash array providers, e.g. IBM's RamSan or Violin. I think they still rebrand Violin, unless they pulled the plug on that relationship.

          1. Man Mountain

            HP sold Violin arrays in theory only; the relationship never got off the ground.

            In terms of latency, if you want to share Violin arrays across a few servers then you have to introduce a gateway (x86) server which introduces more latency, and then you still have the fabric latency as well. That's a worse solution than having a genuine storage array that can deliver ridiculous performance and share it as standard. If you just have one server wanting blistering performance then maybe I can see your point, but to be honest, there are even better and lower-latency ways of achieving that than Violin.

            And the bottleneck in traditional arrays with SSDs is the controller, not the network. VNX, EVA, etc, all flood the controllers after only a very small number of SSDs. 3PAR can support 200 SSDs without that issue. You're clutching at straws. The 3PAR solution might not be quite as fast as a little flash-only box, but it still delivers 500k+ IOPS, plenty for all but the most demanding apps. And the 3PAR delivers that performance without sacrificing usability.

            1. Matthew Morris

              The relationship...

              The relationship got off the ground - the challenge was that it was limited to BCS server products only, the DL980 or HP-UX platforms. The HP folks were great to work with.

              As for Violin latency (you can get this from the website):

              PCIe direct offers the lowest latency: 100µs at a 70/30 mix, up to 250K IOPS, for the 3205/3210 that HP resells.

              FC operates at 200µs at a 70/30 mix, up to 250K IOPS, for the same gear.

              There will always be a latency overhead for any shared fabric (IB, iSCSI, FCoE, FC); that is not uncommon.

              But you can also look at the TPC-E and TPC-C benchmarks at tpc.org, where our storage was used in HP and Cisco submissions for SQL Server and Oracle use cases. Or you can look at the VMware VMmark scores there for HP, Cisco, Dell and Fujitsu. Or any other data on the benchmark page of our site.

              As for expectations: IOPS is about capacity - what matters for VDI, virtualization and data-intensive applications is latency. Other solid-state or flashy solutions may deliver decent IOPS. That's nice. Now deliver the same latency across the board for mixed read/write I/O.

            2. Anonymous Coward

              "if you want to share Violin arrays across a few servers then you have to introduce a gateway (x86) server which introduces more latency," .... "3PAR can support 200 SSDs without that issue."

              If you need 50-100 TB of crazy-high performance and need it across many servers, you might need to go to a traditional SAN-based array. IBM's RamSan 820 can support 20 TB of capacity after RAID, which isn't much if you are talking about storing boatloads of unstructured files, but is plenty for most of the world's DBs/DWs, which is where most people want the performance. Violin is probably similar, but I am not as familiar with their products. 3PAR, or any other traditional SAN array with SSDs, solves a problem which doesn't exist for the vast majority: how to get extreme performance, relative to disk, across a large amount of capacity in a SAN array. Most people only need 500,000 IOPS with 10ms of latency, or in that range, for one or a few DBs or DWs, usually running off one or a few servers. If we get to the point where SSD is the same price as HDD, then I am sure people will take the extra performance for the same or comparable cost to get their random files and other data served at blistering speeds, but outside of a few critical workloads it is primarily about cost per TB, not performance.

              I think internal storage is where performance-critical workload data, and maybe all data, will be stored in the future - for the obvious reason that it is the best place to put data with high performance requirements (next to the CPU), but also because, as you mention, people will no longer have a way of managing data that is distributed across many servers... which sounds like a problem, but it is a problem that the app providers are only too happy to help people solve with additional software products of their own, as opposed to a storage provider's. SAP, Oracle, VMware and MS are already selling products which look curiously like functions that used to be managed by SAN arrays.

  4. Anonymous Coward

    And all for the low, low price of . . .

    100 hundred BEEELION dollars.

  5. Captain Dan

    Performance numbers

    Interesting... just a while ago I was testing one of these 1.2TB ioDrive2 MLC cards in a 2P x86 box on Linux with fio... an 8k 70/30 mix at 32 outstanding I/Os yielded around 90k combined IOPS out of the box, not the 60k shown in the figure. And that was with an older version of the Fusion-io driver from 2012. Judging from the past, the performance has always increased with every driver release. Not much of a gap any more to the claimed 120k IOPS from XtremSF...
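
    A minimal sketch of that kind of fio run, for anyone who wants to reproduce the comparison - the device node /dev/fioa and the 60-second runtime are assumptions here, not details from the original test:

      # Sketch: 8k blocks, 70/30 random read/write mix, 32 outstanding I/Os
      # against a single card, reported as combined read+write IOPS.
      import json
      import subprocess

      DEVICE = "/dev/fioa"  # hypothetical ioDrive2 block device; adjust for your box

      cmd = [
          "fio",
          "--name=iodrive2-8k-7030",
          f"--filename={DEVICE}",
          "--ioengine=libaio",      # Linux async I/O engine
          "--direct=1",             # bypass the page cache
          "--rw=randrw",            # random mixed read/write
          "--rwmixread=70",         # 70% reads / 30% writes
          "--bs=8k",                # 8k block size
          "--iodepth=32",           # 32 outstanding I/Os
          "--runtime=60",
          "--time_based",
          "--group_reporting",
          "--output-format=json",
      ]

      out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
      job = json.loads(out)["jobs"][0]
      print(f"combined IOPS: {job['read']['iops'] + job['write']['iops']:,.0f}")

    Running with direct=1 and libaio keeps the page cache out of the numbers, which matters when comparing raw card IOPS figures like the ones in the article.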

This topic is closed for new posts.