This topic was created by Chris Mellor.
The SMART flash DIMM announcement opened up a major server memory redesign period. The idea of packing NAND chips tightly together and accessing them in the same address space as main memory is highly attractive to server manufacturers looking for an edge in running applications faster than, for example, PCIe flash allows.
SanDisk has bought SMART and now has a DIMM future (sorry). My understanding is that all the major server suppliers are looking at non-volatile memory DIMMs and designing future servers with storage memory, and not just with NAND but envisaging post-NAND technologies such as Phase Change Memory (PCM), Spin Transfer Torque (STT) RAM or some flavour of Resistive RAM (ReRAM) technology.
This technology transition will make storage memory byte- rather than block-addressable, and the programming model would change. There would need to be a software layer, like Memcached, to present storage memory as pseudo-RAM to applications.
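To make the idea concrete, here is a minimal Python sketch of the kind of shim layer described above: a Memcached-style get/put interface whose backing store stands in for byte-addressable storage memory. The class name, the bytearray backing store and the bump allocator are all illustrative assumptions, not any vendor's actual API.

```python
# Hypothetical pseudo-RAM layer: a key/value interface over a flat,
# byte-addressable region (a bytearray here stands in for storage
# memory DIMMs; all names are illustrative).
class PseudoRAM:
    def __init__(self, size):
        self.mem = bytearray(size)   # stand-in for the storage-memory region
        self.index = {}              # key -> (offset, length)
        self.next_free = 0           # naive bump allocator, no reclamation

    def put(self, key, value):
        off = self.next_free
        self.mem[off:off + len(value)] = value
        self.index[key] = (off, len(value))
        self.next_free += len(value)

    def get(self, key):
        off, length = self.index[key]
        return bytes(self.mem[off:off + length])

store = PseudoRAM(4096)
store.put("user:1", b"chris")
print(store.get("user:1"))  # b'chris'
```

A real layer would also have to handle allocation, wear, and crash consistency, which is exactly why the programming model changes.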
We could think of x86 motherboards populated with storage memory DIMMs.
Cisco’s UCS servers are known for having large amounts of RAM. Building on its Whiptail all-flash array acquisition, it would not be surprising if Cisco were to announce storage memory-using servers in 2014. We’re surely going to see Whiptail arrays using UCS servers instead of the Supermicro mills they currently employ.
Dell, IBM, and HP server engineers and designers must be actively looking into the same storage memory technology.
And it’s not just server manufacturers. Storage suppliers with an interest in PCIe flash are also looking at this topic. For example, I’m convinced that WD with its Virident PCIe flash acquisition is looking at the field, as well as Fusion-io. There is a go-to-market issue for the non-server suppliers, as in, who do they sell to?
Do they pursue OEM deals with the server suppliers, or retrofit deals with independent system vendors?
Moving on, in some scenarios a bunch of clustered storage memory DIMM servers could avoid the need for an external flash array, talking to persistent external disk drive arrays for bulk capacity.
I’m seeing storage memory DIMMs as predominantly a server supplier play, and one that limits the applicability of all-flash arrays. Am I smoking pot here? Have my hack’s table napkin-class ideas gone way past reality? Tell me what you think is real here - and if reality bites my ass then I’ve learnt something, which will be good.
"There would need to be a software layer, like Memcached, to present storage memory as pseudo-RAM to applications."
Is this not the job of the operating system? Why shouldn't this type of storage just be treated like a block device? Applications needing raw speed can simply mmap it.
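The mmap route the commenter mentions can be sketched in a few lines of Python. An ordinary temporary file stands in for the storage device here; the point is that once mapped, the application does loads and stores at byte offsets instead of issuing block read()/write() calls.

```python
import mmap
import os
import struct
import tempfile

# A small backing file stands in for a byte-addressable storage device
# (path and sizes are illustrative, not a real product's interface).
fd, path = tempfile.mkstemp()
os.write(fd, b"\x00" * 4096)
os.close(fd)

with open(path, "r+b") as f:
    mem = mmap.mmap(f.fileno(), 4096)
    # Store and load a 64-bit record directly at a byte offset:
    # no block-device read()/write(), just access to the mapped region.
    struct.pack_into("<Q", mem, 0, 42)
    value, = struct.unpack_from("<Q", mem, 0)
    mem.flush()   # push dirty pages back to the "device"
    mem.close()

os.remove(path)
print(value)  # -> 42
```

This is exactly the "block device plus mmap for the fast path" model: the OS owns the mapping and the page cache, and the application sees memory.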
A processor is an expensive controller
DIMM sockets are connected to a processor. They are there to provide RAM for the processor. An inexpensive processor (still not cheap) can support 4 DIMMs. A processor that supports many more DIMM sockets is much more expensive. In effect, the processor becomes a very expensive flash controller. Conclusion: it's cheaper to add all that flash onto a PCIe card. A 16x PCIe slot provides very high throughput without occupying valuable DIMM channel capacity.
If the flash DIMM also provides RAM, the equation changes and the DIMM makes sense. Newer types of memory that provide RAM functionality and static storage are also suitable for DIMMs, but are not yet commercially available.
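The cost argument above can be put as back-of-envelope arithmetic. All prices below are purely hypothetical placeholders (not vendor figures), just to show how a host CPU's cost amortises across the flash modules it serves:

```python
# HYPOTHETICAL prices, for illustration only: the CPU hosting flash
# DIMMs amortises its whole cost across its DIMM slots, while a PCIe
# card needs no extra CPU acting as a flash controller.
cpu_price_4_slots = 300.0     # assumed low-end CPU, 4 DIMM slots
cpu_price_12_slots = 2000.0   # assumed high-end CPU, 12 DIMM slots
flash_dimm_price = 500.0      # assumed per-module flash cost
pcie_card_price = 550.0       # assumed card of comparable capacity

def cost_per_module(cpu_price, slots, module_price):
    """Platform cost per flash module, CPU overhead included."""
    return module_price + cpu_price / slots

print(cost_per_module(cpu_price_4_slots, 4, flash_dimm_price))    # 575.0
print(cost_per_module(cpu_price_12_slots, 12, flash_dimm_price))  # ~666.7
print(pcie_card_price)  # no controller-CPU overhead at all
```

With numbers like these, scaling capacity by buying bigger CPUs loses to the PCIe card, which is the commenter's point.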
Nothing new here. Move along
This looks very much like re-inventing a single level memory architecture.
AFAIK the first implementation was a concept that IBM developed in the early 1970s as Future Series and cancelled because of its projected impact on the 370 series. It was revived as System/38 and put on the market in late 1979. This morphed into the AS/400 range, which became, in turn, iSeries, System i and now the Power Series running OS/400. This range has a single, large storage pool that contains the OS, programs (whether running or not) and data objects. Everything is held in disk-mapped persistent memory that is paged into a RAM cache for execution and immediate access. IOW, unlike conventional computers, which have two separate disk I/O systems (one for accessing files/directories and another for accessing virtual memory pages), OS/400 has just a virtual memory paging system.
The Apple Lisa also used a similar single level storage scheme.
Re: Nothing new here. Move along
The first implementation of single level store was the Atlas computer in 1962, but the one I'm familiar with is Multics, the granddaddy of most operating systems. The project started in 1964 and was at its peak when I first learned to program on it in '73. We used PL/1 and when you opened a file, you got back an address (actually a segment pointer) which you mapped your structures to. I never had to use the traditional read/write operations. The *nix mmap() function does something similar (but not as cleanly, IMHO).
More relevant to storage memory is the idea of multi-level memory where the paging system would fetch pages from a progressively slower (and cheaper) hierarchy of devices. Multics implemented a paging hierarchy that was originally core memory, drum, and then disk. Later on, it was DRAM, core, then disk.
Here is the paper from '75 which describes the subsystem and concludes that the availability of cheaper memory outweighs the complexity of the page management:
Of course the storage industry is looking at this. Chris, I told you so at SNW in October last year, but you wanted to grill me about the "bump in the wire" cache vendors... Ah well.
For posters here, don't get carried away by memory storage schemes and thinking that it's a solved problem, or even an easy problem to solve. It's not.
For anyone that's interested, here's the background and what's being done; http://snia.org/forums/sssi/nvmp, and in particular, this presentation; https://intel.activeevents.com/sf13/connect/fileDownload/session/461EB56CC073EA43BDFCEC22AE2D3C88/SF13_CLDS009_100.pdf
Next year's tech? Probably not. But it will come; byte addressed, persistent and cheap memory is just too attractive given what we have now. More information can be had by contacting the NVM group at SNIA.
DIMM outlook on that
Putting a NIC interface on a large capacity piece of storage should be a much larger seller than putting the storage in the memory of one's PC. Having the NIC storage (NICmem) would allow one to have their storage for their PC, their Tablet, & their phone in one location & pretty much available all the time.
Eliminating all the hardware between the internet and the data should save a considerable amount of hardware (read: energy) costs as well. Maybe Google will then be able to move their memory a few more blocks away from the Bonneville dam to a lower rent district.
This will be true until the first case occurs where someone or some institution has their data pilfered by a government and used against them. Then the outlook for adding DIMM storage to the memory space of a PC will brighten somewhat.
Re: DIMM outlook on that
"Putting a NIC interface on a large capacity piece of storage should be a much larger seller than putting the storage in the memory of one's PC."
Until one tries to access the storage and finds the network is down or the storage device has gone "phut".
If memory/storage is large, cheap and persistent, then it makes economic sense to install it and use asynchronous replication between devices/storage.
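The asynchronous-replication idea can be sketched minimally in Python. The two dicts below stand in for persistent storage regions on separate devices, and every name is illustrative; a real system would ship deltas over a network and handle failures:

```python
import threading

# Minimal sketch of asynchronous replication between two persistent
# "storage memory" regions, modelled as plain dicts (illustrative only).
primary, replica = {}, {}
dirty = set()               # keys changed since the last replication cycle
lock = threading.Lock()

def write(key, value):
    """Write locally at memory speed; replication happens later."""
    with lock:
        primary[key] = value
        dirty.add(key)

def replicate_once():
    """One async cycle: push pending changes to the replica."""
    with lock:
        pending = {k: primary[k] for k in dirty}
        dirty.clear()
    replica.update(pending)  # in reality, sent over the wire

write("session", b"state-1")
write("cache", b"state-2")
replicate_once()
print(replica)  # both keys now mirrored
```

Writes stay at local speed because replication is decoupled from the write path, which is what makes the economics work when the media itself is cheap.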