Nantero, a start-up developing carbon nanotube-based memory, has raised more than $10 million in a Series D funding round to help commercialise its NRAM technology and get licensees bringing products to market. NRAM is one of several candidates identified as potential successors to NAND, with others including Phase Change Memory …
Just a thought... if this type of memory is used to replace RAM in a system then just how the hell are we going to get an errant OS working again? The main fallback situation of IT support is still "turn it off, wait, turn it back on" (i.e. restart)
We'll be knackered unless there's a hardware-controlled, not software-controlled, function to clear the contents of memory and start again from a clean slate! :)
watch out for that reset button coming your way!
If reset buttons return can I also request Turbo buttons do likewise?
Depends on how the OS initialises memory when it's allocated.
Just taking it as is and relying on it being "empty" is extremely bad practice with volatile memory, it just becomes astonishingly daft with non-volatile.
A cold boot will still be a cold boot, with memory being allocated and initialised as required, regardless of what may still be in there from previous times. If what was in there does cause a problem on restart, then you either have a bug in your memory allocation or a fault in the memory itself.
Having said that, a bit of code in the BIOS to zero RAM on a cold start would be a sensible precaution. Hey, the return of running through a memory check on boot!
This is likely to lead to better software, as any failure to correctly initialise allocated memory is far more likely to become painfully obvious in testing with non-volatile memory.
RAM or disk?
The interesting thing is that the separation between RAM and disk is gone at the physical layer. Of course, you can just carve out a chunk of non-volatile memory to act as if it were volatile. If you do this at the firmware level (no need to call it BIOS any longer) then to the CPU it can still be presented as "RAM", but it is also slightly inefficient.
Trouble is that we have no programming model for fast (the other article mentioned 3ns latency) and non-volatile memory. It is as if the whole memory-access hierarchy were gone: cache, DRAM, filesystem. Yes, that would make for a wonderful simplification, but how do you program your memory allocations when the memory manager suddenly merges with the file system? And you will do it, eventually, because cutting out the layers will be most useful in a world where you have lots and lots of very tiny cores and hardware-supported transactional memory they use to communicate with each other. Which just happens to be the current direction of active research.
The second step
Storage Class Memories will be upon us in a year or two, but there isn't yet serious discussion about the system implications of fast persistent storage. It won't be a case of just using it as DRAM that doesn't lose its contents. Operating systems don't recognize that type of memory, so a major change is needed there.
The applications are written around disk storage, and need a major rewrite to work with SCM, especially if word updating is supported.
All in all, the availability of the memories isn't going to be the pacing item unless some action is begun on the software side. Having been involved in an SCM program, I understand the size of that mountain, and it isn't small! There are ways to get some functionality quickly, but the full potential of SCM requires changes to compilers, link editors, memory copy routines, machine check handlers, boot loaders, sleep routines, and the file IO system. A good model for SCM is an extended page memory, but that code presupposes a slow disk save.