Upstart chip chef Diablo's DIMM yum dumplings cram NAND in RAM

Canadian storage startup Diablo has pulled out what it hopes will be a PCIe-flash killer: its new tech provides fast flash data access by interfacing a NAND partner's solid-state drives to a host server's memory bus. Diablo, founded in 2003, has invented what it calls Memory Channel Storage (MCS) in which NAND flash is …

COMMENTS

This topic is closed for new posts.
  1. Bronek Kozicki

    I don't quite understand ....

    So, it is basically a block device attached to the memory channel. Meaning the simplest use would probably be to make it a big swap partition, assuming the driver is available. But how is that better than an ordinary SSD, when the latter is not used as swap???

  2. jcrb

    What a load of........

    Seriously, I wish companies would at least make claims that were even remotely realistic. The reason some flash-over-PCIe product has a read latency of 100us is.... well.... because... you know.... it takes 99us to perform the actual read from the flash; going across that PCIe link adds a whopping 1us of extra latency. This TeraDIMM thing could be attached to an infinitely fast memory bus and it would still only reduce the latency from 100us to 99us.
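
    As a back-of-the-envelope sketch of that arithmetic (using the illustrative 99us/1us split above, not measured figures), here is what swapping the PCIe hop for an idealised memory-bus hop does to the total:

        # Rough latency arithmetic; figures are the illustrative ones from
        # the paragraph above, not measurements of any real device.
        nand_read_us = 99.0    # time to read the page out of the flash itself
        pcie_hop_us = 1.0      # extra latency added by the PCIe link
        membus_hop_us = 0.0    # idealised "infinitely fast" memory bus

        pcie_total = nand_read_us + pcie_hop_us      # 100 us
        mcs_total = nand_read_us + membus_hop_us     #  99 us

        saving = (pcie_total - mcs_total) / pcie_total * 100
        print(f"PCIe: {pcie_total:.0f}us  memory bus: {mcs_total:.0f}us  "
              f"saving: {saving:.0f}%")   # roughly a 1% improvement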

    It can't be that the whole of the flash space is simply memory mapped: the TLBs in almost all servers aren't designed for that kind of physical address space, and the DRAM controller expects responses to all requests on fixed timing; there is no way for a DIMM to say "excuse me, can I get back to you? That address you asked for is not in my cache".
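
    To put a number on the address-space problem, assume (purely for illustration) a 1TB TeraDIMM mapped with ordinary 4KB pages:

        # Hypothetical sizing: how many page mappings a fully memory-mapped
        # 1TB flash DIMM would need (the 1TB capacity and 4KB pages are
        # assumptions for illustration, not vendor figures).
        flash_bytes = 1 << 40          # 1 TB of flash
        page_bytes = 4 << 10           # 4 KB pages
        pte_bytes = 8                  # one 64-bit page-table entry per page

        pages = flash_bytes // page_bytes
        pte_mb = pages * pte_bytes / (1 << 20)
        print(f"{pages:,} mappings, ~{pte_mb:,.0f} MB of page-table entries")
        # ~268 million mappings; a TLB with a few hundred entries cannot
        # usefully cover an address space that large.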

    And then there is the added power draw in DIMM slots that servers were never designed for. And there is no protocol to tell a DIMM that the power is going out, or to prevent a bit of bad software from scribbling all over your "storage".

    A graph of latency under mixed read/write load, to a wider range of addresses than the two or three cached addresses probably used to make that latency graph, might also be informative.

    At the end of the story they admit the flash has to be accessed through a driver stack, or as a swap device through the OS, so again the latency is going to be no better than other PCIe flash devices. In all likelihood the real performance will be worse than other PCIe flash cards, since the power and physical space constraints will limit the kind of processing the flash controller can perform to far less than what a controller sitting in a 20W PCIe slot can do.
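
    If it really is presented as a block device behind a driver, the honest test is the end-to-end latency through that stack. A minimal Linux-only sketch (the /dev/mcs0 node is made up; substitute whatever the driver actually exposes, and expect to need root):

        # Time a single 4KB O_DIRECT read through the OS block layer.
        import mmap, os, time

        DEV = "/dev/mcs0"                  # hypothetical block device node
        BLOCK = 4096

        buf = mmap.mmap(-1, BLOCK)         # page-aligned buffer, as O_DIRECT requires
        fd = os.open(DEV, os.O_RDONLY | os.O_DIRECT)
        t0 = time.perf_counter()
        os.preadv(fd, [buf], 0)            # one 4KB read, bypassing the page cache
        elapsed_us = (time.perf_counter() - t0) * 1e6
        os.close(fd)
        print(f"one 4KB read through the driver stack: {elapsed_us:.0f}us")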

    I know the idea seems cool, but really these TeraDIMMs are TeraDUMM.

    1. Anonymous Coward

      Re: What a load of........

      JCRB,

      Part of the fun of commenting is to take a small amount of information and use guesswork and deduction to try to infer more about the product. People reading posts should always be asking themselves whether there is an agenda. For example, there is a blogger with the same handle working for a competing company (http://www.violin-memory.com/blog/tag/iops/).

      I believe you’ve made some errors in your analysis of the article.

      Any server that supports N DIMMs already has the power and cooling budget to support N DIMMs. Other articles have stated that the MCS solution fits within the power budget of a DIMM. Therefore, a server would have no issues using the technology.

      It is a false comparison to suggest that somehow MCS suffers because of a driver. Clearly all I/O devices require driver stacks, whether they are PCIe, SATA, SAS or MCS based. All I/O devices communicate with the CPU through some hardware infrastructure. The important question is what the final latency of the solution is, and whether that latency is consistent or subject to high variability. The graph shown in the article clearly shows that the MCS solution had low latency with very little variation, while the PCIe latency was significantly larger and non-deterministic. There must be an explanation for the clear performance advantage of MCS.
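
      To make the "consistent versus highly variable" distinction concrete, here is a small sketch (with made-up sample sets, not the article's data) comparing mean, tail and spread:

          # Compare mean, 99th-percentile and spread for two made-up latency
          # distributions; the numbers are illustrative only.
          import statistics

          def summarise(name, samples_us):
              s = sorted(samples_us)
              p99 = s[int(len(s) * 0.99) - 1]
              print(f"{name}: mean {statistics.mean(s):.0f}us  "
                    f"p99 {p99:.0f}us  spread {s[-1] - s[0]:.0f}us")

          tight = [100 + (i % 3) for i in range(1000)]        # low variation
          spiky = [100 + (i % 50) * 4 for i in range(1000)]   # long, variable tail

          summarise("consistent device", tight)
          summarise("variable device", spiky)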
