Micron glues DDR4 RAM to flash, animates the 256GB franken-DIMM

Micron is developing a DDR4-compatible hybrid DRAM-NAND stick to blast data at processors faster than the PCIe bus used by rival flash cache products. DDR4 is a JEDEC standard and can move at least 2.1 billion blocks per second - the block size is set by the memory chip's word length in bits - and it beats the pants off DDR3. …

COMMENTS

This topic is closed for new posts.
  1. Piro Silver badge

    Poor idea..

    Flash is much better implemented attached to spinny disks, or as a buffer for spinny disks on their own.

    Attaching it to RAM simply gives that RAM a lifetime. NAND doesn't last forever. You may not want to replace the RAM.

    1. Anonymous Coward

      Re: Poor idea..

      Sure, but I think you can saturate most if not all disk interfaces with data from Flash, so this seems a damn good idea. You're going to have to replace the flash eventually anyway, and the cost of the RAM will be an insignificant part of the price.

      My only concern is data security, you now have to shred DIMMs as well as SSDs and spinning rust.

      1. Piro Silver badge

        Re: Poor idea..

        You're thinking we're going to stick with existing disk interfaces - there are already insanely fast SSDs available on PCIe cards, and no doubt we will have much faster versions of SATA or a replacement such as SATA Express.

    2. Paul Shirley

      Re: Attaching it to RAM

        You've got it back to front. This is cache RAM attached to Flash. That always makes sense, and just like the cache RAM on your spinning disks, it's a small, disposable part of the package.

        The need for OS support raises some questions. It could be as simple as recognising the device; more likely it's recognising that you don't want a PC using this as RAM, but treating it as an SSD on the DDR4 bus. Or both ;)
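
        Purely speculatively, "an SSD on the DDR4 bus" would mean the driver's read/write path is little more than a memcpy into the DIMM's mapped flash aperture. A toy sketch in C++, with a heap buffer standing in for that aperture (everything here is invented, not Micron's actual interface):

          #include <cstddef>
          #include <cstdint>
          #include <cstring>
          #include <vector>

          // Toy block device backed by a memory window, standing in for
          // flash exposed over the DDR4 bus. A real driver would map the
          // DIMM's aperture; here a heap buffer fakes it.
          class MemoryWindowBlockDevice {
          public:
              static constexpr std::size_t kSectorSize = 512;

              explicit MemoryWindowBlockDevice(std::size_t sectors)
                  : window_(sectors * kSectorSize) {}

              void read(std::uint64_t lba, void* buf) const {
                  std::memcpy(buf, window_.data() + lba * kSectorSize, kSectorSize);
              }

              void write(std::uint64_t lba, const void* buf) {
                  std::memcpy(window_.data() + lba * kSectorSize, buf, kSectorSize);
              }

          private:
              std::vector<std::uint8_t> window_;  // stand-in for the mapped aperture
          };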

  2. Tom 7

    I could probably compile support into Linux in about 30 seconds

    It'd take a bit longer to get hold of the hardware for testing.

  3. Scott Pedigo

    Maybe I'll finally be able to get the build time down to under an hour? Naahhhh....

  4. This post has been deleted by its author

  5. Anonymous Coward

    Some might say "Soon to be copied on a Samsung DIMM near you?" ;))

  6. John Sanders
    Linux

    They offered it to MS who said "nah..."

    Write a patch for Linux, you lazy micron !@£$%^&*.

  7. James 51

    This could be part of the ultrabook spec or micro PCs. Big problem when you decide you want more storage space or RAM, though.

  8. TeeCee Gold badge

    Picture caption fixage.

    "What was a Samsung DDR4 DIMM before some eejit put his thumb on the contacts and cacked it."

    1. diodesign (Written by Reg staff) Silver badge

      Re: Picture caption fixage.

      You've given me a brainwave. Reload :-)

      C.

  9. Bronek Kozicki
    Childcatcher

    I foresee a 5th class of data storage

    Nonvolatile, standing next to the old ones: stack, heap, static and thread-local.

    Icon to match users of this new memory.

  10. joeW
    Facepalm

    First image -

    Holding a DIMM by its connectors?

    That's a paddlin'.

  11. Blergh
    Go

    Instant on computer

    Could this lead to an instant-on computer? Kinda like Hibernate, but simpler and better.

    Apart from the more obvious benefit of less trouble with big chunks of data, that's what I was thinking of when I read it.

    1. Bronek Kozicki

      Re: Instant on computer

      Yes, that's one possible application. With "instant on" you can also have "instant off", i.e. a computer which goes to sleep in a millisecond and wakes up in a millisecond too, meaning it can actually go to sleep even while you use it, while the monitor and GPU continue working. Even more interesting is memory going to sleep when your programs happen not to be using a particular module, without the CPU noticing.

    2. Bob H
      Go

      Re: Instant on computer

      The bigger question is when we get to a position where we no longer make the distinction between working memory and storage. Some micro-controllers could be heading this way with FRAM replacing Flash. Currently a computer loads code and data into RAM to be worked on, but with a hybrid model you can do away with such distinctions below the OS.

      1. Bronek Kozicki

        Re: Instant on computer

        Yes, that is definitely the direction this is going in; that's why I mentioned a fifth storage category above. It requires support in programming languages, though. It is actually not that difficult to add useful stuff to C++; you just have to convince a bunch of people that it is both doable and useful. A new nonvolatile memory category would most likely qualify, if solutions such as phase-change RAM or MRAM hit the market in sufficient numbers and capacities, and a sane programming model is proposed.
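
        To make that concrete, here is a minimal sketch of one possible programming model, assuming (pure assumption, this) that the kernel exposed the DIMM's flash as a mappable device node, called /dev/nvm0 here:

          #include <fcntl.h>
          #include <sys/mman.h>
          #include <unistd.h>

          // Hypothetical "fifth storage class": an object living in a mapped
          // nonvolatile region, outliving the process that created it.
          struct Counter { long hits; };

          int main() {
              int fd = open("/dev/nvm0", O_RDWR);      // invented device node
              if (fd < 0) return 1;

              void* base = mmap(nullptr, 4096, PROT_READ | PROT_WRITE,
                                MAP_SHARED, fd, 0);
              if (base == MAP_FAILED) return 1;

              // Reinterpret the persistent bytes as our object. A real model
              // would need initialisation flags, versioning and crash
              // consistency on top of this.
              Counter* c = static_cast<Counter*>(base);
              ++c->hits;                               // survives reboot, in theory

              msync(base, 4096, MS_SYNC);              // push the write to flash
              munmap(base, 4096);
              return close(fd);
          }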

      2. M Gale

        Re: Instant on computer

        I hope the distinction between RAM and storage never goes away.

        Might sound like heresy in a forum full of coders and hackers, but think about it: Right now you can solve 99% of OS hiccups by turning the machine off and on again.

  12. Sandpit

    "Kinda like Hibernate but simpler and better"

    You mean like sleep? I always sleep my PCs now; they come on in <3 seconds, usable. Not quite instant, but I haven't worried about boot times for quite a long time now. I have no idea how long my PC takes to boot; I only do it when there is an update once a month, and that happens while I am asleep.

    1. Bob H

      The biggest problem that the OS has is letting everything know that the time has miraculously changed and ensuring the hardware is in a sane state.

  13. Anonymous Coward

    Motherboard impact?

    Does this require changes or special support from the motherboard (beyond DDR4)? Or will any DDR4 capable motherboard be able to access this without special hardware?

    1. Nexox Enigma

      Re: Motherboard impact?

      It's a good bet that the BIOS will have to know about this sort of hardware and treat it specially, like Micron's NVDIMMs (similar to this, but with only enough flash to back up the RAM on power loss, with the aid of a battery or capacitor). Then there'll be OS kernel support and, finally, some sort of software support required before these will actually be usable and useful.

  14. bed

    Thanks for the memories

    This makes sense. A) The OS will need to evolve, doing away with the separation of storage and memory; a stepping stone being the RAM disk, which will bring back memories (pun intended) for a few. B) Memory technologies are evolving; memristor, for example. C) A processor with 64-bit memory addresses can directly access 264 bytes (=16 exbibytes) of byte-addressable memory (Wikipedia) – more than enough for the next couple of weeks.

    1. bed

      Re: Thanks for the memories: 264 byte

      264 byte should, of course, be 2^^64 (two to the power of sixty four) bytes. Such are the perils of copy and paste.

      1. Anonymous Coward

        Re: Thanks for the memories: 264 byte

        "should, of course, be 2^^64 "

        Erh, I think you mean 2^64

        1. Gavin King
          Happy

          Re: Thanks for the memories: 264 byte

          "Erh, I think you mean 2^64"

          The extra one's to make up for the missing one: now there are on average two circumflexes where there ought to be two circumflexes.

    2. Luke McCarthy

      Re: Thanks for the memories

      Current Intel processors can only address 32GB, due to an external address bus much smaller than 64 bits. Expanding this would mean more pins and a new socket. Removing the concept of volatile storage is the future, though, especially once post-NAND technology is commercialised. Filesystems would become obsolete and databases would be much simpler to develop.

      1. Bronek Kozicki

        Re: Thanks for the memories

        A new chip design would be needed anyway, because some of the nonvolatile technologies can potentially have latency in the low tens of nanoseconds, i.e. on par with 2nd- or 3rd-level cache. Meaning you'd only need one level of cache, thus freeing lots of space on the chip. You could use it for more cores, a larger level-1 cache, dunno what else. Very exciting, although we are still far from it.

        1. Frumious Bandersnatch

          Re: Thanks for the memories

          New chip design would be needed anyway

          I see lots of interesting comments here, your own being particularly interesting. So anyway, this is a response to quite a few of those posts...

          I think that if we're going to see more of this sort of thing (storage that blurs the boundaries between RAM, flash and disk storage, as well as the ability to completely power off components when not in use) then we're going to need a fundamentally different architecture to take advantage of it. This goes beyond just new chip design (where even today cores can be started up and shut down at a whim) and into having some sort of "power arbitration" bus, with the entire system backed up by a small, finite battery.

          For the instant-on/instant-off scenarios using flash as hibernate/sleep storage, you need to be able to guarantee that the system can finish writing the OS state data in case of loss of mains power. For the scenario of, e.g., keeping power routed to the GPU while it's doing some computation task but shutting down other non-essential stuff (though probably keeping, say, Ethernet alive to enable a kind of wake-on-LAN feature), you probably want to be able to budget how much you can do while on internal battery power, and also have the ability to suspend gracefully when you're approaching its limit. Not trivial stuff at all.

          Of course, it's very unusual these days for us to have battery power built onto the motherboard (as opposed to in an external UPS). If these devices/ideas become commonplace, though, we're sure to see many innovations in power management overall. I shudder to think of all the new failure cases when we stick a new device (be it faulty or malicious) into the machines of the future, though...

  15. Anonymous Coward

    Maybe?

    DDR4 has some value for servers and portable devices that have new LV (low-voltage) requirements, but for conventional desktop PCs it's of no value at all and will be expensive. With DDR3 you can now have LV and add RAM as desired. With DDR4 you install all of the RAM from the get-go, or replace all of it if you want to increase the density.

    As far as frequency is concerned, there is no tangible benefit to DDR3 RAM frequency above 1600 MHz, as this is not a system bottleneck for a typical workstation or personal desktop PC. Most x86 server CPUs won't run RAM above 1600 MHz currently, so DDR4 and faster-frequency hybrid DDR3+ don't offer any value for server applications either. As with PCIe 3.0, DDR4 is a technology that we may need down the road, and that is why it's defined in advance: so that the industry can evolve into it as needed in the future.

    1. Alan Brown Silver badge

      Re: Maybe?

      "With DDR4 you install all of the RAM from the get go or replace all of it if you desire to increase the density."

      I'd wager that 99.99% of all desktop PCs/laptops never have RAM tweaks during their operational lifespan.

      Half the time it's cheaper to replace an entire motherboard than pay for extra RAM on an aging system.

    2. Ammaross Danan
      FAIL

      Re: Maybe?

      "there is no tangible benefit to DDR3 RAM frequency above 1600 MHz. as this is not a system bottleneck for typical work station or personal desktop PCs."

      Actually, AMD APUs see significant graphics-subsystem gains with DDR3-2133 (or any step up from the horrid DDR3-1066 that usually ships with cheapo PCs). Intel integrated GPUs don't benefit much, but their GFX performance is horrid (comparatively) anyway.

      "so DDR4 and faster frequency hybrid DDR3+ doesn't offer any value for server applications either."

      Do note that with increased frequency, your memory throughput increases. Just because current programs don't make significant use of 22Gbps throughput over 14Gbps (most machines only have 4-6GB of RAM total anyway) doesn't mean that NO program could be engineered to do so, especially knowing there's 256GB of NAND storage hiding in a DDR4 slot (hence the OS support requirement).

      THAT location is where I, as a programmer, would dump my table cache that couldn't fit into actual volatile RAM, as it's guaranteed to have better throughput and access/storage speed than a spindle drive. Windows could utilize it by copying the whole OS there too. A game could make use of it by stuffing map packs, texture files, etc. in there rather than leaving them on a spindle drive. Clustered systems could make significant use of it as well. We'll have to see. However, no one will design for it if they don't have hardware to test on, nor likelihood of adoption.
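
      For what it's worth, here's a crude sketch of that table-cache spillover idea in C++, with an in-memory map standing in for the NAND-backed region (all names invented; a real version would map whatever the OS actually exposes):

        #include <cstddef>
        #include <optional>
        #include <string>
        #include <unordered_map>

        // Crude two-tier cache: hot rows in DRAM, overflow in the
        // NAND-backed region, cold misses falling through to the spindle.
        class TieredTableCache {
        public:
            std::optional<std::string> get(const std::string& key) {
                if (auto it = dram_.find(key); it != dram_.end())
                    return it->second;               // tier 1: DRAM
                if (auto it = nand_.find(key); it != nand_.end()) {
                    std::string row = it->second;
                    nand_.erase(it);
                    put(key, row);                   // promote hot row to DRAM
                    return row;
                }
                return std::nullopt;                 // caller goes to the spindle
            }

            void put(const std::string& key, const std::string& row) {
                if (dram_.size() >= kDramBudget)
                    evictOneToNand();
                dram_[key] = row;
            }

        private:
            static constexpr std::size_t kDramBudget = 1'000'000;  // rows, not bytes

            void evictOneToNand() {
                auto victim = dram_.begin();         // real code: LRU, not "first"
                nand_[victim->first] = victim->second;
                dram_.erase(victim);
            }

            std::unordered_map<std::string, std::string> dram_;
            std::unordered_map<std::string, std::string> nand_;  // stand-in for
                                                                 // the mapped NAND
        };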

  16. Luke McCarthy

    How does it work?

    Does it present the whole flash memory as if it were RAM, with the RAM caching reads/writes?

  17. jai

    >Holding big databases in memory

    But surely the appeal of having a big database in memory is that you'd get memory speeds.

    From the way I read the article, this would allow you to load the big database into memory, but you'd still be caching from the slower flash chips, and so you'd still have performance not much better than holding the big database on a flash SSD and caching through your standard memory, no?

    1. Anonymous Coward

      Re: >Holding big databases in memory

      There will be a use in some cases for this hybrid DIMM, but as you note, it might not be for large database servers. Most of those already live on larger Unix or mainframe systems. At least with the Oracle DB, you can point it to flash (normally placed on the PCIe bus) to use for extra caching. And now, in the case of the Oracle DB, Oracle is pushing Exadata to its larger customers. With their offloading of queries to the storage units, less memory and fewer CPU resources are required on the database nodes.

      So IBM and Oracle may take advantage of it, but I don't know if it will catch on for primary memory in the near term. Both vendors are always coming up with better tricks to improve performance. The hybrid DIMM seems to be more of a workaround, something geared towards lots of little Winders VMs where performance doesn't matter. The author of the article even notes Micron's preference for Windows drivers/support, which supports this theory.

    2. Steve Knox

      Re: >Holding big databases in memory

      1. You'd be caching the most-used data, so a big database with some super-important bits and mostly only somewhat-important bits would work much better. This wouldn't work as well for big databases with an evenly-distributed data-usage pattern, though. It also depends on the quality of the cache algorithm.

      2. As mentioned above, flash already saturates most HDD interfaces, so giving it a faster interface will improve performance. This is (again, as mentioned above) already being used to justify PCIe flash storage. So even with a database too big to fit entirely in the DRAM cache, there will be a speed improvement, as the flash can serve the data faster than HDD (and possibly even PCIe, with a suitable layout and controller) interfaces.

      1. Frumious Bandersnatch

        Re: >Holding big databases in memory

        You'd be caching the most-used data,

        Alternatively/additionally, you'd probably find it useful to hold indexes in RAM, and implement some sort of ageing/caching algorithm that keeps new and frequently-used data in flash and the rest out on spinning disks. If you use a log-based structure for the flash storage and periodically rewrite out to disk (perhaps redundantly, depending on whether new indexing constraints are required) then you can optimise both reads and writes across all storage layers. Something like SILT or log-structured merge trees, but with spinning disks as the final storage layer, optimised to reduce fragmentation and extra seeks.
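
        A sketch of that write path, say, with a deque standing in for the flash log and a flat file standing in for the disk tier (file name invented):

          #include <cstddef>
          #include <cstdio>
          #include <deque>
          #include <string>
          #include <utility>

          // Sketch of the log-structured middle tier: writes append
          // sequentially to the flash log; once the log grows past a
          // threshold, the oldest entries are rewritten to spinning disk
          // in one sequential pass.
          struct Record { std::string key, value; };

          class FlashLogTier {
          public:
              explicit FlashLogTier(std::size_t threshold) : threshold_(threshold) {}

              void append(Record r) {
                  log_.push_back(std::move(r));      // sequential flash write
                  if (log_.size() > threshold_)
                      flushOldestToDisk(log_.size() - threshold_);
              }

          private:
              void flushOldestToDisk(std::size_t n) {
                  // One sequential pass keeps the spinning disk seek-free.
                  FILE* disk = std::fopen("cold_store.dat", "ab");  // invented name
                  if (!disk) return;
                  for (std::size_t i = 0; i < n; ++i) {
                      const Record& r = log_.front();
                      std::fprintf(disk, "%s=%s\n", r.key.c_str(), r.value.c_str());
                      log_.pop_front();
                  }
                  std::fclose(disk);
              }

              std::size_t threshold_;
              std::deque<Record> log_;               // stand-in for the flash log
          };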

  18. cyberdemon Silver badge
    Alert

    Hmmm.

    Massive DRAM cache for an SSD?

    I hope you have a very good UPS, is all I can say..

    1. P. Lee

      Re: Hmmm.

      > I hope you have a very good UPS, is all I can say..

      I assume it's fast enough to write data to the non-volatile part before the power dies away completely.

      1. Frumious Bandersnatch

        Re: Hmmm.

        I assume it's fast enough to write data to the non-volatile part before the power dies away completely.

        That's not a good assumption. Power failure when writing to SSDs can trash even bits of data that weren't being written at the time, thanks to wear-levelling algorithms effectively moving random blocks around whenever you make a write. See "write amplification" on Wikipedia for a pretty good description.

    2. Ammaross Danan
      FAIL

      Re: Hmmm.

      "OS Support" would imply exposing to the programmer which is volatile vs non-volatile for the programmer to decide which one to use for which task. Database servers don't eat themselves in the event of a power loss event and can resume semi-gracefully now, and we don't even have non-volatile RAM for them yet. Why would you assume we'd be worse off than we are now?
