EMC's DSSD rack flashers snub Fibre Channel for ... PCIe

The rack-scale flash array technology EMC gained in its DSSD gobble connects to a server using PCIe. That's according to a Barron's interview with EMC's product ops head Jeremy Burton. A DSSD flash vault can fill a rack and hooks into a server using a PCIe connection – actually it needs many, many lanes to have a rack o' …

COMMENTS

This topic is closed for new posts.
  1. Lusty

    so

    a bit like a bigger Violin array then?

  2. Nigel Campbell

    On the back of an envelope

    I think you might just have invented infiniband. It's switchable and you could certainly implement a key-value store on top of RDMA or a channel.
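
    For illustration, here's a minimal Python sketch of that idea: a key-value store layered over one-sided RDMA-style reads and writes. The rdma_read/rdma_write helpers are hypothetical stand-ins for real verbs operations (e.g. RDMA READ/WRITE work requests), and a local bytearray stands in for the remote flash region.

    ```python
    # Hypothetical sketch: a key-value store over one-sided RDMA-style ops.
    # REGION models a registered memory region on the remote flash box;
    # rdma_read/rdma_write stand in for real RDMA READ/WRITE verbs.

    REGION = bytearray(1 << 20)   # "remote" registered memory region
    SLOT = 256                    # fixed-size value slots; hash collisions ignored for brevity

    def rdma_write(offset: int, data: bytes) -> None:
        # One-sided write: no CPU involvement needed on the target side.
        REGION[offset:offset + len(data)] = data

    def rdma_read(offset: int, length: int) -> bytes:
        # One-sided read straight out of the remote region.
        return bytes(REGION[offset:offset + length])

    def put(key: str, value: bytes) -> None:
        off = (hash(key) % (len(REGION) // SLOT)) * SLOT   # key -> slot offset
        rdma_write(off, value.ljust(SLOT, b"\0"))

    def get(key: str) -> bytes:
        off = (hash(key) % (len(REGION) // SLOT)) * SLOT
        return rdma_read(off, SLOT).rstrip(b"\0")

    put("block42", b"hello flash")
    print(get("block42"))          # b'hello flash'
    ```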

    1. Anonymous Coward

      Re: On the back of an envelope

      My first thought as well. An over-the-flash server-to-server data toss will probably net you a faster communications channel than "normal" mechanisms.

    2. Lusty

      Re: On the back of an envelope

      "I think you might just have invented infiniband"

      Current InfiniBand is slower than current PCIe, so there is a difference, albeit one that most people won't care about. This type of storage isn't aimed at most people, though :)
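
      For a rough sense of the gap, a quick line-rate comparison (circa-2014 published figures, approximations rather than benchmarks):

      ```python
      # Approximate usable line rates, circa 2014 (not benchmarks).
      # InfiniBand FDR: 14.0625 Gbaud/lane with 64b/66b encoding, 4 lanes.
      # PCIe 3.0: 8 GT/s/lane with 128b/130b encoding, 16 lanes.

      ib_fdr_4x = 14.0625 * (64 / 66) * 4     # ~54.5 Gbps
      pcie3_x16 = 8 * (128 / 130) * 16        # ~126 Gbps

      print(f"InfiniBand FDR 4x: ~{ib_fdr_4x:.1f} Gbps")
      print(f"PCIe 3.0 x16:      ~{pcie3_x16:.1f} Gbps")
      ```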

  3. storagevulture

    Isn't "real-time historical" an oxymoron?

    1. virtualgeek

      Disclosure - EMCer here:

      No, it isn't. "Real-time" refers to "very fast analytics" and "historical" refers to "very large datasets accumulated over time".

      Put it together, and DSSD has as one of its targets applications that need to do extremely fast analytics over a very large dataset (much larger than you can fit on locally attached PCIe flash).

  4. Hapkido

    "A networked all-flash array, using Fibre Channel, will have a read latency of about a millisecond, roughly 17 times longer."

    Well, that's a misleading statement. For most storage controllers, yes, I would agree.

    Let's take 5 µs of latency per km of fibre, then add, say, 2 µs of latency for the FC SAN switch. The rest is in the controller and the 'software features'.

    Compare the IBM RAMSAN devices (without many software features) to most others that have them. That is where most of the difference lies. At a guess, 25% hardware controller and 75% software, though you can argue the split differently. Now, eventually, these 'storage features' will need to be included in PCIe-based systems too (when integrated with shared storage).
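
    Putting those figures together as a back-of-the-envelope budget (the controller/software split below is an illustrative assumption, and the NAND figure is a commonly published one):

    ```python
    # Back-of-the-envelope FC read-latency budget (microseconds).
    # Transport figures from the comment above; the rest are assumptions.

    propagation_us   = 5 * 1    # ~5 us/km of fibre, assuming a 1 km link
    switch_us        = 2        # one FC SAN switch hop
    nand_read_us     = 100      # typical MLC NAND read (published numbers vary)
    controller_sw_us = 800      # assumed controller + 'software features' overhead

    transport = propagation_us + switch_us
    total = transport + nand_read_us + controller_sw_us
    print(f"total ~{total} us; transport is only {transport} us "
          f"({100 * transport / total:.1f}% of the budget)")
    ```

    On those (arguable) numbers, the fabric contributes well under 1% of a ~1 ms response time, which is the point: the controller and software features dominate.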

    1. ByteMe

      I have to agree with Hapkido. Very misleading statement. The inherent read latency of MLC NAND flash itself is around 100 µs (published numbers vary). With an all-flash array you can easily achieve sub-millisecond response times at the host layer using 8 Gbps FC.

      Scott Dietzen said it best. Moving flash closer to the compute is BS because NAND flash itself is the largest bottleneck, not the interconnect. The last thing storage teams need right now is yet another storage interconnect technology to support and troubleshoot.

      1. diodesign (Written by Reg staff) Silver badge

        Re: ByteMe, Hapkido

        "With an all-flash-array you can easily achieve sub-millisecond response times at the host layer using 8 Gbps FC."

        Well, FWIW, it's what Burton claimed. I've tweaked the article.

        C.

      2. Anonymous Coward

        Disclosure - EMCer here.

        What the comments re: latency of FC vs. PCIe miss is parallelism (the system-level latency envelope is a combination of latency and parallelism).

        FC is quite serialized between initiator and target (as you would expect it to be, having been designed as an extension of SCSI direct connectivity).

        This is A-OK for IOPS rates in "normal" array bands, even in good AFA bands (think 100-1M IOPS, latencies below 1 ms).

        PCIe is fundamentally parallel by design, with the ability to issue many, many, many outstanding IOs.

        If you had a target which aimed to do, let's say, 10x+ more IOPS than even the best AFAs, with latencies that were, let's say, 10x+ lower, and you wanted system bandwidth that was, let's say, 10x+ higher - you would of course need a lot more parallelism inside the target than you see even from the best AFAs (XtremIO as an example)....

        And remember - this is not designed to compete with AFAs per se, but to target entirely new workloads which eschew LUNs and filesystems as persistence targets.

        ... add it all up and, well, the "FC-attached flash is low-latency enough" argument doesn't hold up (technically).
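
        Little's Law makes the parallelism point concrete: IOs in flight = IOPS × latency. A minimal sketch with illustrative numbers (the 10M IOPS target is hypothetical, not a DSSD spec):

        ```python
        # Little's Law: concurrency (IOs in flight) = throughput x service time.
        # NAND device time stays ~100 us either way, so a 10x-IOPS target
        # must keep ~10x more IOs in flight inside the box.

        def ios_in_flight(iops: float, service_time_s: float) -> float:
            return iops * service_time_s

        afa    = ios_in_flight(1_000_000, 100e-6)    # good AFA band
        target = ios_in_flight(10_000_000, 100e-6)   # hypothetical 10x target

        print(f"1M IOPS @ 100 us  -> ~{afa:.0f} concurrent IOs")
        print(f"10M IOPS @ 100 us -> ~{target:.0f} concurrent IOs")
        ```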

        1. skagenator

          Reply to EMCer and PCI

          Disclaimer - Brocadian here:

          Native FC is somewhat serialized with SCSI, but that's SCSI, not FC, so let's make sure we discuss the right things and don't make misleading statements. Furthermore, many FC target controllers, including EMC's, are not serial as such, i.e. they have many queues etc. Now, NVMe is better for flash than SCSI, no doubt there, but running NVMe over FC as a transport is something the industry is working towards. FC can easily push 2M 4K IOPS with SCSI, and the latency is much lower than 1 ms. In addition, you can get to ~70 µs latency with flash on FC – I'm of course assuming an all-flash device (no hybrid arrays). Making the argument that PCIe is better than FC is comparing apples and oranges; it will largely depend on the use case and how the driver stack works on either technology (a quick sanity check on those figures is sketched below).

          But we are missing the biggest question here: extending PCIe out of the server is not an easy task. PCIe is very timing-sensitive, so a few metres can be done with a special adapter etc., but a 1,000-physical-node network is much harder. Don't think you can build a PCIe SAN the way you build FC SANs.
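
          As promised, a sanity check on those FC figures via Little's Law (my arithmetic, using the numbers quoted above):

          ```python
          # Little's Law check: outstanding IOs = IOPS x latency.

          iops      = 2_000_000    # "FC can easily push 2M 4K IOPS"
          latency_s = 70e-6        # "~70 us latency with flash on FC"

          outstanding = iops * latency_s          # ~140 IOs in flight
          payload_gbps = iops * 4096 * 8 / 1e9    # 4K reads on the wire

          print(f"~{outstanding:.0f} IOs in flight")
          print(f"~{payload_gbps:.0f} Gbps of payload")  # ~66 Gbps: several 8G links
          ```

          About 140 IOs in flight and ~66 Gbps of payload: plausible across a fabric with many queues and ports, consistent with the "many queues" point above.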

      3. Lusty

        "With an all-flash-array you can easily achieve sub-millisecond response times"

        Try not to use "sub-millisecond", as this is still potentially orders of magnitude slower than microsecond-class latency. "Sub-millisecond" was brought into marketing to combat Violin, who were claiming low-microsecond latencies. Sub-millisecond includes 999 microsecond latency, which is 10 times slower than 100 microsecond latency and 100 times slower than 10 microsecond latency. What many people forget, while thinking these times are so low it doesn't matter, is that one clock cycle of a modern Xeon is very short indeed - a quarter of a billionth of a second at 4 GHz, in fact. This means that if your storage could operate at 1 microsecond latency, the CPU would still have to wait 4,000 cycles for the information to arrive. If you look at a worst-case marketing "sub-millisecond" latency of 999 µs, then the CPU will be waiting around 4,000,000 cycles (see the arithmetic sketched below).

        To the average Joe with an average estate and average workloads, this doesn't matter. Most of the time VMware will fill in the blank cycles with other work anyway. For those that need the performance though, this is all critical stuff even if EMC do appear to have photocopied someone else's technology with the anti-patent filter engaged.
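
        Lusty's cycle arithmetic, worked through (assuming a 4 GHz Xeon core):

        ```python
        # Cycles a 4 GHz core spends waiting on storage at various latencies.

        clock_hz = 4e9   # assumed 4 GHz; one cycle = 0.25 ns

        for latency_us in (1, 100, 999):
            cycles = latency_us * 1e-6 * clock_hz
            print(f"{latency_us:>4} us latency -> ~{cycles:,.0f} stalled cycles")
        # 1 us -> ~4,000 cycles; 999 us ("sub-millisecond") -> ~3,996,000 cycles
        ```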

  5. dekmannj

    Linux Controllers don't add any latency?

    I find it difficult to believe they can reliably achieve that low latency if they have to run each I/O through a Linux controller layer, which presumably isn't all ASIC. Chris, where is the skepticism you demonstrate with so many other technologies?? Given EMC's history of developing via acquisition rather than true internal R&D (as a rule, there are exceptions), this just sounds like another puff of vapor. Any dates on shipment? Any customers alpha testing?

    1. This post has been deleted by its author

    2. virtualgeek

      Re: Linux Controllers don't add any latency?

      ... Disclosure, EMCer here - while Chris does his usual work of ferreting out good deets, there are some errors in here (which is fine), and one of them concerns the data path for IOs.

      Post acquisition, we disclosed that this was an early-stage startup, similar to when we acquired XtremIO (small number of customers, but pre-GA product). Just like with XtremIO, it was an extremely compelling technology - ahead of where we saw the market (and our organic work down similar paths - there was an organic project similar to DSSD codenamed "Project Thunder" - google it).

      Re: organic (internal) vs. inorganic (acquisition/venture funding) innovation, it's almost an exact 50/50 split.

      My own opinion (surely biased): thank goodness EMC does a lot on BOTH sides of the equation.

      Time has shown again and again that without healthy internal innovation (C4, ViPR control/data services, MCx, Isilon over the last 2 years) **AND** inorganic innovation (VPLEX, DSSD, etc.), all high-tech companies ultimately struggle.

      My opinion? Thinking anyone can out-innovate all the startups, all the people in schools, and the entire venture ecosystem is arrogant. This is why it's such a head-scratcher to me when people say it's a "bad thing" to acquire - frankly, customers like that we have good products and know we will continue to bring good ones to market (both organically and inorganically). IMO, it's a smarter move to play in the whole innovation ecosystem in parallel with organic, internal-only activity.

      1. Lusty

        Re: Linux Controllers don't add any latency?

        "This is why it's such a head scratcher to me when people say its a "bad thing" to acquire"

        It's not a bad thing to buy good tech - it's what happens next that counts. The reason tech acquisition is often frowned upon is that tech companies often just change the logo and call it their own, rather than taking that technology and merging it into their own over time. One only has to look at the HP storage line-up to see this in action, with completely different technology in every product and no attempt to cross-pollinate. If the LeftHand network RAID is so great, how come 3Par hasn't added it in? If ASICs are so great, how come LeftHand still uses a Xeon? All they have done is paint the LeftHand yellow, and even then one of the fascias is upside down!

        If acquisition leads to conflicting marketing material across your range then yes, it's a bad thing. Dell are surprisingly good at this: they are slowly but surely merging their various acquisitions into all of their products. Microsoft are also usually good with this sort of thing, with some notable exceptions.
