Samsung, Micron bake 3D chips for next-gen RAM

We're hitting a memory wall, if you didn't know, and processor cores are going to be held up because DRAM can't scale up enough or ship 'em data fast enough. Samsung and Micron aim to fix that with 3D memory cubes and a consortium to define an interface spec for them. Samsung and Micron, asserting that existing 2D DRAM …

COMMENTS

This topic is closed for new posts.
  1. E 2
    Thumb Up

    A title is required

    I like it!

    1. Anonymous Coward
      Anonymous Coward

      I like it too

      ...but for future reference, a title is not required :)

  2. LPF
    Facepalm

    Wait for RAMBUS....

    to come along and sue...

  3. Stephen Booth
    Boffin

    Only a stop-gap

    There are two fundamental problems with conventional 2D DRAM.

    1) The number of available pins on the device is proportional to the chip perimeter, so the available connections between memory and processor grow much more slowly than Moore's law.

    2) DRAM cells are built with specialised high-density processes, so you can't add much in the way of additional logic on the same die (while keeping them cheap to manufacture). DRAM chips therefore connect via simple signalling, where the energy cost is proportional to the capacitance of the wires and the frequency.

    By stacking chips and using TSVs you get round both problems. The number of TSVs that can be supported is proportional to the chip area and the connections have extremely low capacitance.
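
    A rough back-of-envelope in Python, for anyone who wants to play with the numbers (every figure below is an illustrative guess of mine, not from the article):

        # Pad-limited (perimeter) vs TSV (area) connection counts.
        # All numbers are made-up assumptions, not real process data.
        die_side_mm = 10.0    # assume a square die, 10 mm on a side
        pad_pitch_mm = 0.1    # assumed peripheral pad pitch (100 um)
        tsv_pitch_mm = 0.05   # assumed TSV pitch (50 um)

        perimeter_pads = int(4 * die_side_mm / pad_pitch_mm)  # scales with side length
        area_tsvs = int((die_side_mm / tsv_pitch_mm) ** 2)    # scales with side length squared

        print(perimeter_pads)  # 400 pads around the edge
        print(area_tsvs)       # 40000 TSVs across the face

        # Signalling energy per transition scales roughly with C * V^2,
        # and a short TSV has far less capacitance than a board trace.
        def wire_energy_pj(capacitance_pf, voltage_v):
            return capacitance_pf * voltage_v ** 2  # pF * V^2 = pJ

        print(wire_energy_pj(10.0, 1.5))  # 22.5 pJ for an assumed 10 pF board trace
        print(wire_energy_pj(0.05, 1.5))  # ~0.11 pJ for an assumed 50 fF TSV

    Quadratic beats linear pretty quickly, which is the whole point of TSVs.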

    I'm assuming the plan is to put efficient RAMBUS-style high-speed serial interfaces in the logic layer to connect to the CPU.

    This keeps memory and CPU as separate devices and allows standard memory devices to be used with different types of CPU. However, though better than what we have at the moment, high-speed serial interfaces still take power.

    The right place for TSV-stacked memory is directly on top of the processor, with external memory devices used only as a top-up for higher-than-normal memory configurations.

  4. Flocke Kroes Silver badge

    Well done ARM

    Last time I saw innovation from the DRAM industry, Intel ignored it and went for RDRAM because they got a sweet patent licensing deal that let them crush third-party chipsets. Things must have changed radically enough that DRAM manufacturers are confident that their R&D spending will turn into a mainstream product. AMD and Via do an excellent job of keeping entry-level x86 under $1000, but they do not really challenge Intel. If this product gets anywhere it will be because Intel worries 3DRAM+ARM will be cost-effective.

    1. Relgoshan

      Er

      If it's not a Mac, they probably paid less than $700 here in the US. High-end is a tiny fraction with lots of shelf space but very low volume.

  5. Bronek Kozicki
    Meh

    What "performance" ?

    throughput or latency, or some combination of both???

    Really want to know since it's latency which is the drag in most applications (which is why CPUs acquire ever larger caches), but so far it proved very difficult to reduce!

  6. Bela Lubkin

    Optional

    Why must they publish a new spec for this thing? Use 2^n layers (2^1 initially, I suppose) and just use some of the high address bits as the layer selector. Or some of the low address bits -- whichever arrangement performs better.

    Yes, there might be some extra performance to be eked out if the memory controller is more specifically aware of the new arrangement. So OK, bake in some new out-of-band signal a newfangled controller can use to access new info, but keep it within existing signaling so the same memory can be used on old systems.
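
    To make the high-bits version concrete, here's a toy sketch in Python (the bit widths are arbitrary assumptions of mine, not from any spec):

        # Toy layer-select schemes; all bit widths are arbitrary assumptions.
        LAYER_BITS = 2   # 2^2 = 4 layers
        ADDR_BITS = 32

        def layer_from_high_bits(addr):
            # Top bits pick the layer: each layer owns one big contiguous region.
            return addr >> (ADDR_BITS - LAYER_BITS)

        def layer_from_low_bits(addr):
            # Bottom bits pick the layer: consecutive addresses interleave
            # across layers, which might spread heat and parallelise accesses.
            return addr & ((1 << LAYER_BITS) - 1)

        addr = 0x80000003
        print(layer_from_high_bits(addr))  # 2 (top two bits are 0b10)
        print(layer_from_low_bits(addr))   # 3 (bottom two bits are 0b11)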

  7. Relgoshan
    Thumb Up

    Eesh

    The tech seems promising, but the board interface would need an architectural shift to make use of that extra speed. Memory controllers are on the CPU, placing practical restraints on their size and power consumption. It's good they are building a consortium, because the whole system would need to be rejigged. Anyone looking forward to PGA (or BGA) memory modules?

    On the other hand, RAM will need to get "smarter" as stated by my fellow commentard above. Just as the HDD and SSD have crazyass logic, and laptop batteries can be virused, new RAM will be working a lot of invisible black magic. Ideally this would involve mixed memory and logic within the same package, for less chip overhead when the PCB already needs more traces. And the overclocked stuff may be getting water jackets.

  8. Anonymous Coward
    Anonymous Coward

    I thought it wasn't any good if it wasn't thin though? Are Ultramegahyperbooks going to need a big boil in the middle?

  9. Boris the Cockroach Silver badge
    Happy

    and once

    this technology is perfected, the CPU bus will become the bottleneck again... and then the CPU/GPU and then back to the HDD etc etc etc until we all own quantum computers that fit inside a watch

  10. Disco-Legend-Zeke
    Pint

    Cool!

    That's the issue: how to cool the darn thing.

  11. Bela Lubkin

    .

    The article says "up to 70 per cent less energy per bit", so in theory it should be a bit easier to cool than current tech. At the same density, anyway. And if their fantasy numbers come true...

  12. Anonymous Coward
    Anonymous Coward

    Memory test and assembly

    I'd be interested to know how they are going to test these beasts and what the initial yields will be. The papers that I've read about die stacking indicate that the stacking process has a significant impact on the actual parts - turning die that separately passed manufacturing test into defective junk in the stack. That could get expensive. Also, the test costs have got to be a big concern - they'd need to test the die, then the stack, and finally the packaged part. Memory testers don't come cheap - although they could put a load of built-in self-test (BIST) in the logic layer and use low-cost testers once the part is stacked and packaged.
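
    Back-of-envelope on why stacking could hurt yield (the figures below are invented by me, purely illustrative):

        # Every die in the stack must survive, and so must each bonding step.
        # All yield figures below are made-up assumptions for illustration.
        die_yield = 0.95         # assumed per-die yield after wafer test
        bond_step_yield = 0.98   # assumed yield of each stacking/TSV bonding step
        layers = 4               # assumed 4-high DRAM stack

        stack_yield = (die_yield ** layers) * (bond_step_yield ** (layers - 1))
        print(round(stack_yield, 3))  # 0.767 - nearly a quarter of stacks junked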

    They must have all this sorted though.
