How are they steering the laser?
Surely using a parallel beam and a DLP chip, or similar, would bring it back to conventional speeds. If they're using a perpendicular beam, well, that sounds complex, bulky, and expensive.
Boffins in Taiwan and the University of California predict that nanoscale CMOS memory could soon be on its way after research showed nanodot memory operating 10 to 100 times faster than current RAM. The electro-optics researchers also emphasised that they had used materials that are compatible with mainstream integrated circuit …
Why do you think we have fast caches for chips? Imagine the entire memory working at the speed of the CPU. That would be awesome.
At the moment, I have to think hard about cache-friendly processing orders. Getting it wrong can easily incur a 10-fold speed penalty. If I have a set of for-loops to traverse an image, having the x-coordinate loop outside the y-coordinate loop is ten times slower than the reverse, because of the way images are stored (row by row). A step in x moves to the next element in memory (= cache hit with standard read-ahead), whereas a step in y jumps a whole row of data further, yielding a cache miss.
Such simple cases are easily sorted out, but some image processing has data-driven processing orders, very frequently requiring odd memory jumps. In these cases, getting rid of latency is a godsend. Also, think of multi-core: ensuring cache coherence is a pain. Older Cray machines had no cache, and the memory worked at the speed of the CPU. That is much simpler and yields much better parallelization.
CPU/RAM communication speed is indeed a big bottleneck, which is why there is so much attention paid to CPU caching issues both on the hardware and software sides.
Now, faster RAM is clearly important but it's pretty much useless if the bus speed isn't up to par. Unfortunately I don't have the faintest idea whether bus speeds can easily be improved, maybe some competent commentard can enlighten us on this matter?
This post has been deleted by its author
I was wondering the same thing, and as I've not yet read the paper and don't have an AIP subscription (or want to pay just to satisfy my curiosity), I hope someone who has read it can enlighten us.
I was equally mystified by the description of 7V for a write/erase cycle as 'low'. Both the write cycle timing and voltage are ok in relation to NVRAM technologies such as EEPROMs, but make no sense in the DRAM usage context.
My guess is that since the only mention of RAM is outside of the quote (which says non-volatile memory), maybe El Reg got it wrong? It certainly does look more like a replacement for NAND (or possibly NOR) Flash rather than DRAM. A 1us p/e cycle would be a hell of an improvement over current NAND.
Also, +/- 7V doesn't seem remotely 'low' compared to any sort of modern volatile or non-volatile solid state memory. Maybe they mean it's low compared to alternatives that are currently in development? Or low compared to their last prototype? It certainly doesn't scream 'efficient operation on battery power,' which is kind of necessary for mobile use. Then again, DC-DC converters aren't all that bad these days.
As usual, yet another faster RAM story that is going nowhere any time soon (i.e. the next two years). Until the general public can actually buy these faster RAM products in shops, no one really cares, do they?
There have been far too many stories proclaiming faster RAM that we still can't actually buy in retail. I don't care what they do in the lab any more, that's for sure.
Data write (storing charge) and data erase (removing charge)
So... this RAM can only store 1's?
Does that mean it can only store data that uses "zero-compression"? - you know, the compression where they fit a movie onto a 5.25" floppy disk by removing all the 'useless' data - the 0's.
Yes, that's my jacket with the highly illegal terrorist weapon (green laser pointer) in the pocket. Back off man, I'm an architect.