AMD touts LRDIMM memory for x86 servers

It looks like Advanced Micro Devices is first to market with support for load reduced DIMM (LRDIMM) DDR3 main memory for x86 servers (and quite possibly all kinds of servers), and is trotting out Inphi, the maker of the isolation memory buffer chip at the heart of this technology. With LRDIMM memory, you take out the register on a DDR3 …

COMMENTS

This topic is closed for new posts.
  1. Voland's right hand Silver badge

    All nice if you can find them

Well, let's have a look. AMD's new Fusion APUs support 64GB per socket, motherboards support 64GB per socket (Asus F-series), so let's go find some memory... Nope. Nada. Nil. Zilch.

    All that talk about more memory is nice, but nobody is walking the walk.

For the time being the BIG DIMM capacities are mostly in the mythical range. Once you go past 8GB per DIMM the options are scarce and extortionately priced (with or without an extender in the equation). Once you go past 16GB per DIMM you are definitely in la-la land.

I'll be blunt: can I see those LRDIMMs, please? A Crucial catalogue number, or at the very worst a vendor part number? Nope, I cannot. So let's move along then...

    1. Matt Bryant Silver badge

      RE: All nice if you can find them

      ".....a vendor part number....." If you are buying vendor servers then you won't be putting non-vendor memory parts in anyway, otherwise you usually invalidate your support/warranty. The server vendors make the memory makers jump through higher hoops to get through their suitability checks (the tier1 x64 vendors want five-nines parts, remember, or as close as they can get). And vendor part numbers are very unlikely to appear until the actual servers are launched by the vendors, which will probably (at the earliest) be when AMD trots out the market-ready CPUs. With both AMD and Intel talking them up, I'd say it's pretty much a certainty they'll be real parts soon.

  2. Anonymous Coward

Another win for AMD and its customers

    Looks good to me.

  3. netlistpost

Re: LRDIMM will sell, or would you buy HyperCloud?

    So in summary we have for LRDIMMs:

- LRDIMMs require a BIOS update on current servers, but Romley systems will probably ship with BIOS support for LRDIMMs

    - LRDIMMs are not interoperable with standard RDIMMs

- LRDIMMs have "latency issues", so much so that the 16GB LRDIMMs are rendered non-viable against the 16GB RDIMMs (2-rank, using the newer 4Gbit DRAM dies). HP/Samsung said at the IDF conference on LRDIMMs that they will focus on the 32GB LRDIMM, since it still retains an advantage over the 32GB RDIMMs (those are 4-rank, and a 2-rank version won't be available until an 8Gbit DRAM die arrives in a few years).

- Inphi documents show them achieving a maximum speed of 1066MHz when running 768GB in a 2-socket server (see references below).

    And for HyperCloud:

- HyperCloud is plug and play and requires no BIOS update (this is possibly related to Netlist IP covering "Mode C" operation, where the BIOS is fooled into thinking a module with a lower "virtual" rank count is being used)

    - HyperCloud is interoperable with standard RDIMMs

- NLST HyperCloud has similar latency to RDIMMs (a huge advantage) and a "4 clock latency improvement" over the LRDIMM (see references below).

- HyperCloud runs 768GB @ 1333MHz in a 2-socket server (i.e. 3 DPC @ 1333MHz) (see references below, and the arithmetic check after this list).
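A quick arithmetic check on the 768GB and speed figures above, assuming the usual Romley topology of 2 sockets x 4 DDR3 channels per socket at 3 DPC, with 32GB modules (the channel count is an assumption; the posts don't spell it out):

----

# Python sketch: capacity and peak bandwidth under the assumed topology.
SOCKETS, CHANNELS, DPC, DIMM_GB = 2, 4, 3, 32  # assumed, not from the posts

print(SOCKETS * CHANNELS * DPC * DIMM_GB, "GB total")  # 768 GB, as claimed

# Peak bandwidth per socket: each channel moves 8 bytes per transfer.
for mts in (1066, 1333):  # LRDIMM vs. HyperCloud, both at 3 DPC
    print(mts, "MT/s:", round(CHANNELS * mts * 8 / 1000, 1), "GB/s per socket")
# 1066 MT/s -> 34.1 GB/s; 1333 MT/s -> 42.7 GB/s: roughly a 25% gap.

----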

  4. netlistpost

LRDIMM will sell, or would you buy HyperCloud?

Inphi is the only provider of LRDIMM buffer chipsets. IDTI is out (its Jan 30, 2012 conference call suggests it may introduce something for Ivy Bridge, so nothing for Romley from IDTI), and Texas Instruments is out (probably thanks to its settlement with Netlist in Netlist vs. Texas Instruments).

    So Inphi will dominate LRDIMM.

The problem is that HP/Samsung are saying they will not push the 16GB LRDIMMs (watch the video of the IDF conference on LRDIMMs on the main page of Inphi's LRDIMM blog). That's because of the latency issues inherent in the LRDIMM design (a huge 628-pin centralized buffer chip with long line lengths, compared to Netlist's distributed buffer IP). The 16GB LRDIMM cannot outperform the newer 16GB RDIMMs based on 4Gbit DRAM dies (watch the question-and-answer session in the above video).
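To put that latency argument on a rough scale: DDR3-1333 uses a ~667MHz clock (two transfers per tick), so the "4 clock latency improvement" claimed for HyperCloud works out to about 6ns. A purely illustrative check:

----

# Python sketch: convert the claimed 4-clock gap into nanoseconds.
clock_ns = 1000 / (1333 / 2)         # DDR3-1333 clock period, ~1.5 ns
print(round(4 * clock_ns, 1), "ns")  # ~6.0 ns

----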

In short: HP/Samsung will only push the 32GB LRDIMM.

HP has removed references to LRDIMMs from its website - note the dead links in a Google search (but then HP has recently done an "exclusive" deal with Netlist).

This reduces the market considerably for LRDIMMs (especially for the "let's use lots of lower-density, cheaper memory" crowd).

Since the LRDIMM market is estimated to be 20% of the Romley server market (its use only becomes necessary in high memory-loading situations), the question is whether it will be LRDIMMs that take this space, or something as simple as Netlist's HyperCloud memory, which has none of the LRDIMM issues (see the summary above).

  5. netlistpost

Listening to the IDTI Jan 30, 2012 conference call, one can hear them saying the same thing: that 16GB LRDIMMs are not going to cut it (latency issues versus 16GB RDIMMs built from 4Gbit DRAM dies make the LRDIMM non-competitive), and that only the 32GB LRDIMMs will sell, which leaves only 2%-3% of the market for the 32GB LRDIMM.
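A back-of-envelope on why die density dictates rank count here: a DDR3 rank is 64 data bits wide, so with x4 DRAM devices (a typical choice for high-capacity registered modules, assumed here since the posts don't give the device width) a rank takes 16 data devices:

----

# Python sketch: GB per rank as a function of DRAM die density.
# ECC devices are excluded; they add check bits, not usable capacity.
def rank_capacity_gb(die_gbit, device_width=4, data_bits=64):
    return (data_bits // device_width) * die_gbit / 8  # Gbit -> GB

for die in (4, 8):
    gb = rank_capacity_gb(die)
    print(f"{die}Gbit: {gb:.0f}GB/rank, 16GB = {16 / gb:.0f} ranks, 32GB = {32 / gb:.0f} ranks")
# 4Gbit dies: 8GB/rank, so 16GB is 2-rank but 32GB needs 4 ranks.
# 8Gbit dies: 16GB/rank, so 32GB drops to 2 ranks - hence the wait.

----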

In contrast, the full market for LRDIMMs was supposed to be 20% of Romley, according to analysts.

This leaves the 16GB segment to Netlist. But then why buy even the 32GB LRDIMM, with its Inphi buffer chipset, if the Netlist part is superior?

One thing not mentioned by IDTI: if LRDIMM fails for Romley and that market goes to Netlist, why would anyone buy LRDIMMs later, when IDTI plans to release LRDIMMs for Ivy Bridge in 2013?

    http://www.seobythesea.com/2009/11/google-to-upgrade-its-memory-assigned-startup-metarams-memory-chip-patents/#comment-422085

    netlist

    02/01/2012 at 11:59 am

    quote:

    ----

    at the 41:45 minute mark ..

    For the second part of your question with respect to LRDIMM.

    We have been very consistent in our .. uh .. discussion of the size of the LRDIMM market.

    We believe .. that in the Sandy Bridge .. uh .. generation of Romley .. uh .. that the attach rate for LRDIMM will be small.

    It will be probably 2 or 3 percent (2%-3%) of all of those Romley .. of all of those servers.

Now, remember Intel's got this tick-tock strategy .. uh .. so the tock is the Sandy Bridge and then there is a die-shrink which is the tick .. which is Ivy Bridge.

    Now, Ivy Bridge is 1600MHz, whereas Sandy Bridge is only 1333MHz.

    continued ..

  6. netlistpost

    continued ..

Ivy Bridge also allows for 3 DIMMs per channel (3 DPC), whereas Sandy Bridge only allows for 2 DIMMs per channel (2 DPC) (NOTE: they probably mean at full speed).

    at the 42:40 minute mark ..

    So if you go through the analysis .. which I am not going to bore you with here .. and you look at the benefits of LRDIMM in Sandy Bridge, the cost-performance tradeoff is not .. uh .. not very favorable.

    It turns out - now just give you the answer .. uh .. that you can build a DIMM using .. uh .. uh .. 64 .. I'm sorry 4Gbit DRAM and standard Registered DIMM (RDIMM) that has .. really a lower cost and roughly equal performance to what you would get with LRDIMM - that's why the attach rates for LRDIMM in Sandy Bridge is relatively small.

The only place where LRDIMM will give you a performance tradeoff in the Sandy Bridge generation is in the 32GB DIMMs, not in the 16GB DIMMs.

So the 32GB DIMMs are only about 2-3% of the total market.

    at the 43:45 minute mark ..

    That that .. that's the explanation for why that attach rate is small.

    Now go to Ivy Bridge where you've got 1600MHz (and) 3 DIMMs per channel (3 DPC) - go through the same analysis - it is MUCH more favorable for LRDIMM.

    And so we anticipate that in the Ivy Bridge generation, the attach rate will be 15%-20%.

    ----

  7. netlistpost

    Inphi comments at Stifel Nicolaus conference

    Inphi comments at Stifel Nicolaus conference on Feb 7, 2012 can be found at:

    http://www.seobythesea.com/2009/11/google-to-upgrade-its-memory-assigned-startup-metarams-memory-chip-patents/#comment-422747

    netlist

    02/11/2012 at 11:00 pm

    And a commentary on Inphi's comments:

    http://www.seobythesea.com/2009/11/google-to-upgrade-its-memory-assigned-startup-metarams-memory-chip-patents/#comment-422751

    netlist

    02/12/2012 at 12:51 am

Inphi is suggesting LRDIMM sales may pick up in the second half of the year; for now they are just expecting RDIMM buffer chipset sales from pent-up Romley demand, with CIOs becoming interested in LRDIMMs later in the year.

But what are they going to get interested in, if the 16GB LRDIMM is not being pushed by HP/Samsung, and the 32GB LRDIMM cannot do 3 DPC at 1333MHz (and has the "high latency" issues compared to Netlist HyperCloud)?

  8. netlistpost

    Inphi comments at Stifel Nicolaus conference 2

However, Inphi does convey some uncertainty about the demand for LRDIMM, perhaps correctly suggesting that OEMs have little interest in pushing memory that would obviate the need to buy more servers (if adding memory allows more virtual machines on the same servers, and so on):

    quote:

    ----

    at the 16:10 minute mark ..

    Tore Svanberg - Stifel Nicolaus (Analyst):

    And, you know, you showed a chart, I think it was maybe an IVC (?) chart looking at you know how 32GB and then eventually 64GB will ramp.

Umm .. but when you talk to your biggest customers, let's say, you know Micron and Samsung, I mean how are they looking at that type of ramp.

    Uh .. is it very similar .. uh .. and you know what types of penetrations are they talking about both this year and next year ?

    John Edmunds - CFO:

    Umm .. so .. it's difficult in the supply chain to get a lot of .. uh .. uh .. forecast.

Uh .. because essentially 32GB is a new product altogether.

So in general people don't know how much demand there will be for that.

    So we actually think it'll become .. uh .. uh .. it'll become demand .. it will be pull driven in effect, because you will have CIOs who are saying I'm going to order a batch of Romley systems, but I want them to be the LRDIMM configured, because I've tried that .. I can see that really benefits my application.

Uh .. and .. because .. for two reasons, the new product and because .. uh .. the system and memory guys don't know how many customers (are) out there - they are going to call for that.

They are conservative right now on what they think will actually be the demand, or what will be the shipments for ..

    at the 17:25 minute mark ..

    As I showed you earlier on that one example, there is also less hardware to ship if you are shipping that kind of configuration.

    So I'm not sure anybody's out there banging the bush sayin' "hey do this and buy less hardware" right ? It's a little bit of an anomaly in that sense.

    ----

  9. netlistpost

    Inphi comments at Stifel Nicolaus conference 3

Inphi is pushing LRDIMMs out to "Ivy Bridge", perhaps because at 32GB and 64GB they stand a better chance against RDIMMs (i.e. the "high latency" issues matter less if the competing 32GB RDIMMs are going to be 4-rank, etc.), even though the 32GB LRDIMMs cannot do 3 DPC at 1333MHz. However, all of that assumes there is no third-party Netlist IP providing memory modules that outdo LRDIMMs while having none of the LRDIMM "high latency" issues.

    quote:

    ----

    at the 17:35 minute mark ..

    But I think once the .. uh .. once the Romley systems get out and people are able to verify that you can HAVE 50% more virtual machines or you could have, you know, better throughput, why wouldn't they go with the LRDIMM.

    They are going to run in that direction.

    And .. uh .. we think that's all good .. uh .. we think as Ivy Bridge comes in you'll see more .. uh .. applications of LRDIMM ..

    ----

Here Inphi suggests that DDR4, with its decentralized buffers, is the "same architecture as LRDIMM", despite LRDIMM's centralized buffer (the 628-pin buffer chip). Netlist has been saying for some time that DDR4 intersects NLST IP, and an article by the same author is also suggestive of that ("Netlist puffs HyperCloud DDR3 memory to 32GB - DDR4 spec copies homework"):

    quote:

    ----

The good news is when we get to DDR4 people are more interested in gravitating towards the LRDIMM .. uh .. architecture, where people can choose to buy a separate register chip and as many buffers as they would like.

And .. uh .. it allows for a more ubiquitous .. uh .. implementation of the same .. uh .. architecture as LRDIMM - you just do away with 2 independent products and one product can scale into .. into what anybody might need.

    ----

