This isn't quite mass production ("Can I get this in my hands right now?"), but I will admit pilot production ("How about some of you give this a try?") is tantalizingly close.
Two Fujitsu semiconductor businesses are licensing Nantero's NRAM technology, with joint Nantero-Fujitsu development to produce chips in 2018. These will have rewrites several thousand times faster, and many thousands of times more rewrite cycles, than embedded flash memory. NRAM (non-volatile RAM) is based on carbon nanotubes (CNTs) …
"Reliability-wise it retains data for >1,000 years ..."
You have to make certain assumptions about material properties and behaviour to extrapolate that theoretical result from testing that lasts a lot less than 1,000 years. CNTs are a very new material, so how can they be certain those assumptions hold for CNTs?
Also, what is the theoretical working lifetime of the associated support and driver electronics? Random gamma rays do pass through everything, etc.
It's an extrapolation of observable properties, using a model for how heat affects molecules over time. In a direct sense, it's a useless figure; any given bit of NRAM will probably get lost or destroyed in some random accident on that time scale, unless you are extraordinarily cautious with it - in which case you'll have backups anyway.
In an indirect sense, though, it may be useful as a comparison with other technologies, as everyone is probably using the same model anyway.
Mostly, though, it's a marketing-friendly way of saying "it doesn't spontaneously lose data".
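For what it's worth, the thermal model the poster alludes to is usually an Arrhenius-style extrapolation: bake parts at high temperature, then scale the observed retention by an acceleration factor. A minimal sketch of that arithmetic, with entirely hypothetical numbers (activation energy, temperatures) chosen for illustration:

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K

def acceleration_factor(ea_ev, t_use_c, t_stress_c):
    """Arrhenius acceleration factor: how much faster thermally activated
    failure mechanisms run at the stress temperature than at use conditions."""
    t_use_k = t_use_c + 273.15
    t_stress_k = t_stress_c + 273.15
    return math.exp((ea_ev / K_B) * (1.0 / t_use_k - 1.0 / t_stress_k))

# Hypothetical figures: 1.2 eV activation energy, 55 C use, 300 C bake.
af = acceleration_factor(1.2, 55.0, 300.0)

# A single 24-hour bake then "demonstrates" af * 24 hours of retention at 55 C,
# which is how multi-century claims come out of week-long tests.
projected_years = af * 24 / (24 * 365)
```

The catch the poster raises is exactly the `ea_ev` parameter: the extrapolation only holds if the dominant failure mechanism and its activation energy are known, which is a hard thing to claim for a brand-new material.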
As an interested computer user, not a tech expert, I find it very interesting to read about these new technologies and the speed increases they could potentially pass on to our computers. Speed increases will doubtless be noticeable, but does this then move the bottleneck to the system bus and/or HDD/SSD, and how many years before the price makes this technology practical for us home users? Will CNT tech make a wider system practical, and is that a problem others are looking at, or maybe it's not a problem at all?
>Speed increases will doubtless be noticeable, but does this then move the bottleneck to the system bus and/or HDD/SSD, and how many years before the price makes this technology practical for us home users?
Good questions, for which clues to some answers can be found in the history of the tech we use today. Already you can buy mass storage that sits on your PCIe bus instead of your SATA.
Some advantages, such as instantly waking from a zero-power sleep state, might require a tweak to the computer's power management system and CPU. The lower power consumption is of greater benefit to embedded and mobile applications than to desktops, though.
In the mid term, a technology that is as fast as RAM and as non-volatile and capacious as an HDD will fundamentally change how a desktop computer is designed. That is, why have separate RAM and mass storage?
Whilst you might not consider yourself an expert, you know better than others where the bottlenecks already are in your system, in relation to the tasks you put it to.
"In the mid term, a technology that is as fast as RAM and as non-volatile and capacious as a HDD will change how a desktop computer is designed fundamentally. That is, why have separate RAM and mass storage?"
It will definitely alter the concept of how computers will have to work, especially with regard to how one defines what belongs where: code spaces, libraries, data, and so on. Plus, how will this affect existing paradigms like USB Mass Storage?
A bit of editing needed in the paragraph
NRAM seems to be far faster than XPoint, and could be denser. An Intel Optane DIMM might have a latency of 7-9ms (7,000-9,000ns). Micron QuantX XPoint SSDs are expected to have latencies of 10ms for reading and 20 ms for writing; that’s 10,000 and 20,000ns respectively.
it should read
NRAM seems to be far faster than XPoint, and could be denser. An Intel Optane DIMM might have a latency of 7-9us (7,000-9,000ns). Micron QuantX XPoint SSDs are expected to have latencies of 10us for reading and 20 us for writing; that’s 10,000 and 20,000ns respectively.
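The fix checks out; for the record, the unit being corrected is µs, not ms (the original was off by a factor of 1,000 against its own ns figures):

```python
US_TO_NS = 1_000  # 1 microsecond = 1,000 nanoseconds

optane_dimm_us = (7, 9)              # quoted Optane DIMM latency range
quantx_read_us, quantx_write_us = 10, 20  # quoted QuantX SSD latencies

# The corrected µs figures agree with the ns figures given in the text.
assert tuple(u * US_TO_NS for u in optane_dimm_us) == (7_000, 9_000)
assert quantx_read_us * US_TO_NS == 10_000
assert quantx_write_us * US_TO_NS == 20_000
```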
Maybe they should come up with a better abbreviation before this stuff gets popular?
That aside, I'm not really sure about this part:
"Nantero is seeing two potential markets: stand-alone non-volatile memory such as NRAM DIMMs, and embedded devices needing very fast non-volatile memory."
It's as fast as normal DRAM while using less power, stable enough to use as storage while being much faster than everything else, and already has hundreds of GB on a single chip. It's the classic too-good-to-be-true breakthrough that usually ends articles with "...and they hope to do something commercially in 10-15 years" and is then never heard of again, except this time it's apparently already gearing up for production. So why are they aiming at a couple of minor niche markets when they could seemingly take over the entire world's memory and storage markets?
"So why are they aiming at a couple of minor niche markets when they could seemingly take over the entire world's memory and storage markets?"
While production volume is still low, they want to make as much profit per unit as possible. Hence the niche markets.
Then, when it's clear that production can safely be scaled, they will do that and reduce the price.
I noticed that abbreviation. In the same paragraph it also mentioned "hair." I ended up thinking about CNT hairs for the rest of the article. Once an idea like that wedges between your teeth, it's hard to remove.
Cue the City University of Newcastle upon Tyne jokes. And the Northern Ireland Police Service jokes.
>So why are they aiming at a couple of minor niche markets when they could seemingly take over the entire world's memory and storage markets?
Because some of the advantages of non-volatile RAM would currently be wasted in mass-market devices such as desktops and phones. Once these devices are designed to take advantage of it, the market will grow and prices will drop - a virtuous circle. There have always been games of chicken and egg in IT! :)
Carbon nanotubes are so over, don't they know everything has to use graphene now? They should re-issue the statement and call it rolled graphene or something then explain how the graphene is made in such a way that it creates tubes just like carbon ones, but better because graphene.
>Are we into mass production (tonne lots) [of Carbon Nano-Tubes] yet?
*It depends upon how sir would like his CNTs. How long d'ya want them? How consistent, how pure? What's your application? Do you want them for their electrical properties, or for their mechanical ones? What's sir's taste in substrate, if any? We regret to inform sir that we currently have no mile-long tubes available for extreme engineering projects...
The implications of memory technology which is a) faster than dynamic RAM and b) non-volatile, are considerable. Presumably there are research projects addressing this – and it would be interesting if these were, if possible, publicised.
Computer architecture has not changed significantly since von Neumann's day, i.e. not much change since 1945: "...a memory to store both data and instructions, external mass storage, and input and output mechanisms." OK, they have become smaller and faster, but mass storage and memory are still separate systems.
Along with unifying mass storage and RAM, add in a 128-bit data bus and you get instantaneous boot (or never off) and essentially unlimited RAM - because the whole world is your memory, IPv6-style. It means that somewhere just over the horizon is the potential for a bigger change in computing than anything we have been through yet.
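To put a rough number on "essentially unlimited": the jump from today's 64-bit addressing to 128-bit is not a doubling but a factor of 2^64. A quick sketch of that arithmetic:

```python
# Flat address-space sizes, in bytes, for common pointer widths.
bytes_64 = 2 ** 64    # what a 64-bit machine can address: 16 EiB
bytes_128 = 2 ** 128  # roughly 3.4e38 bytes

assert bytes_64 == 16 * 2 ** 60            # exactly 16 exbibytes
assert bytes_128 // bytes_64 == 2 ** 64    # 64 extra bits = ~1.8e19x more space
```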
Well, there are a few. One that comes to mind, referred to a few times here at the Reg, is the HP project called "The Machine". Its focus is precisely a system where RAM and persistent storage exist as one unit. The next logical step is the integration of this sort of technology directly on the CPU package, avoiding all the current buses.
Each CPU core could then have direct access to all the data it needs, and the only bus required would be for external connections (user input + network).
To a certain extent we are in fact starting to see a move in this direction: the new professional AMD cards include persistent storage to avoid moving data around too much.
I suppose the next step is the
"Computer architecture has not change significantly since Von-Neumann’s day, i.e. not much change since 1945: “...a memory to store both data and instructions, external mass storage, and input and output mechanisms.” O.K. they have become smaller and faster but they are still separate mass storage and memory systems."
This has been done - repeatedly. Virtual memory was conceived as a way of faking it; this just moves the goalposts towards the "memory is like an orgasm: it's a lot better if you don't have to fake it" end of the spectrum. Pin bandwidth will remain a serious bottleneck, as will addressing, which also adds latency and burns power. That said, NRAM sounds brilliant - let's hope it lives up to the hype. :)
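For the curious, the "faking it" is visible from user space today: a file-backed `mmap` lets a program treat storage as if it were ordinary memory, with the OS paging data in and out behind the scenes. A minimal Python sketch:

```python
import mmap
import os
import tempfile

# Create one page of backing store on "disk".
path = os.path.join(tempfile.mkdtemp(), "backing.bin")
with open(path, "wb") as f:
    f.write(b"\x00" * 4096)

# Map the file into the address space and write to it like memory.
with open(path, "r+b") as f:
    mem = mmap.mmap(f.fileno(), 4096)
    mem[0:5] = b"hello"   # an ordinary slice assignment...
    mem.flush()           # ...explicitly pushed back to the file
    mem.close()

# The "memory" write persisted to storage.
with open(path, "rb") as f:
    assert f.read(5) == b"hello"
```

Storage-class memory essentially promises this illusion without the page-in/page-out machinery, which is why addressing and pin bandwidth become the next bottlenecks.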
"Yep. It means all of the technology will crash even faster, and now has the added benefit that turning it off and on again won't fecking help."
Turning it off and on again doesn't help if it's the drive that has the corrupt data. That doesn't change. Backups and error-correcting code won't be going away and may in fact become more important in this kind of system.
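On the error-correcting-code point: the classic single-error-correcting scheme is a Hamming code. A toy Hamming(7,4) sketch (illustrative only; real memory controllers use wider SECDED codes over whole words):

```python
def hamming74_encode(d):
    """Encode 4 data bits (list of 0/1) into a 7-bit Hamming codeword."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4   # parity over codeword positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4   # parity over positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4   # parity over positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(c):
    """Locate and flip a single corrupted bit; returns the repaired codeword."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    pos = s1 * 1 + s2 * 2 + s3 * 4   # 1-based index of the bad bit; 0 = clean
    if pos:
        c[pos - 1] ^= 1
    return c

word = hamming74_encode([1, 0, 1, 1])
corrupted = list(word)
corrupted[2] ^= 1                    # simulate a random bit flip
assert hamming74_correct(corrupted) == word
```

Persistent main memory arguably raises the stakes for this kind of machinery, since a flipped bit can no longer be cured by a reboot.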
You forgot to mention that all those tiers of mass storage exist to emulate (as best as possible) the infinite memory space of a proper von Neumann machine. This just flattens the topology slightly.
Funny thing about record keeping: it's pathological. It doesn't seem to matter how much capacity we develop to record information; the amount generated soon grows to fill and exceed that capacity. I don't see this changing anything. It's just more room for cats, plates, and homages to idiocy. Given the infinite nature of the latter, no amount of capacity will ever suffice.
"the amount generated soon grows to fill and exceed that capacity. I don't see this changing anything"
Look harder. And don't conflate the amount of information available for storage-clogging purposes with the amount of information required for a specific application. Yes, there's so much information floating around in the world that we can always capture more of it to fill any storage medium. On the other hand, I still remember the time when games couldn't afford to include recorded sound (making do with synth chips like AdLib) and full-motion video was absolutely unthinkable, simply for lack of sufficient storage. Also, storage needs don't quite scale at the rate memory capacity keeps growing: first you afford to store digitized music, then you afford to store music losslessly, then you afford to store all the music you'll ever care to listen to - then you're done, music-wise. You don't really need more storage. Sure, you can move on to video, but that has a similar nature.
And beyond all that, one should keep in mind that more storage capacity is not really what this technology is about at all - rather, it's about eliminating the need for two different kinds of memory in computers. Unfortunately, the fact that I can reliably crash and corrupt Firefox down to an actual snow crash within sixty seconds of loading Google StreetView, each and every time I use it, does not bode well for the implications of no longer having a more or less "clean" state to reboot from. Server software may well be hardened for long-term use, but we have absolutely no idea how to write user applications that don't crash miserably, sooner rather than later...
"Unfortunately, the fact that I can reliably crash and corrupt Firefox down to an actual snow crash within sixty seconds of loading Google StreetView each and every time I use it does not bode well for the implications of no longer having a more or less "clean" state to reboot from."
If you can consistently crash Firefox using StreetView when most people get along just fine, then perhaps you should check whether your installation is still clean. That goes to the point I was making. We expect the code stored on our drives to remain pristine, yet that's always been a gamble, thanks to rogue processes, runaways, and cosmic radiation. If the data on the storage is screwed, we're in the same boat as when a program in storage-class memory goes bad. We've lost the known-good state, we're into GIGO, and we'll have to reinstall - which would happen either way. Of course, if there's a backup or error-correcting code, perhaps we can retreat to a good state, but that's not something changing the storage medium will alter.
Except this is pilot production; this isn't vaporware anymore. This is actual initial production going on RIGHT NOW. Besides, mass production is only about TWO years away given the normal pace of fab retooling (and it helps that this tech doesn't seem to require anything too exotic, which would slow that timetable down). Nantero is probably well aware they have a shot at the checkered flag here, but because their tech has a chance to cause a paradigm shift, they can't jump the gun and risk being ahead of their time.
If that happens we're pretty much screwed, since the energies involved in cosmic radiation (remember, cosmic rays are mostly high-energy protons and heavier nuclei, more massive and far more energetic than alpha particles) mean they can go through nearly everything. It would take shielding of the kind you see in nuclear power plants, only much more so, to keep them away from your goods.
Biting the hand that feeds IT © 1998–2019