SanDisk's ExtremeFFS technology can speed up random writes to flash memory by up to 100 times, but does nothing for sequential writes. How does it work? The company says ExtremeFFS (Extreme Flash File System) "operates on a page-based algorithm, which means there is no fixed coupling between physical and logical location. When a …
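One way to picture "no fixed coupling between physical and logical location" is a translation table in the controller. This is my own toy sketch of the general idea, not SanDisk's algorithm: every logical-page write lands on whatever physical page is free next, and the table is updated to point at it, so the old physical copy just becomes garbage to erase later.

```python
# Toy page-mapped translation layer (my own illustration, not SanDisk code):
# a logical page can live on any physical page; writes never overwrite in place.

class PageMappedFTL:
    def __init__(self, n_physical_pages):
        self.mapping = {}                      # logical page -> physical page
        self.free = list(range(n_physical_pages))
        self.stale = set()                     # physical pages awaiting erase

    def write(self, logical_page, flash):
        phys = self.free.pop(0)                # any free physical page will do
        old = self.mapping.get(logical_page)
        if old is not None:
            self.stale.add(old)                # the old copy becomes garbage
        self.mapping[logical_page] = phys
        flash[phys] = logical_page             # stand-in for the real program op

ftl = PageMappedFTL(8)
flash = {}
ftl.write(0, flash)   # logical page 0 lands on physical page 0
ftl.write(0, flash)   # rewrite lands on physical page 1; page 0 goes stale
assert ftl.mapping[0] == 1 and 0 in ftl.stale
```

A random write is now just as cheap as a sequential one, because the controller never has to read-modify-erase-write the page that logically "contains" the data.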
Sounds like a normal wear-leveling system... anything *really* new?
"SanDisk has also added a process whereby pages that are often accessed sequentially are placed contiguously so that their access is speeded up."
I thought one of the cool things about flash was that it does *not* matter where the data you want to read is placed. Is there a (hopefully small, by comparison) overhead in the controller if one address differs greatly from the last? Does it need to power up individual cells first, or is this just a marketing thing (people are so used to defragmenting that they will insist on it existing on SSDs too =) ?
I have been watching developments in SSDs for some time, mainly with an eye to giving older laptops a new lease of life. With the cost of a pen drive being as low as 8GB for about 10-12 quid, why are these SSDs so expensive per GB? OK, I understand there's additional I/O circuitry and more sophisticated wear-leveling techniques, and I assume faster read/write access times etc., but is the technology so different that it demands such a premium?
Should I just get an IDE-CF (yes IDE, not SATA...) adapter and plug in a fast 120/133x-speed 16GB CF card instead? But then I won't get any of the benefits of wear leveling, right?
For when you are perplexed by a story?
For example: ExtremeFFS, what do they think they are doing!
Nice to see a clear explanation. Something the Inq. failed to do...
Oh FFS, couldn't they have found a better acronym?
This sounds to me like it's just what Fusion-io were doing on their ioDrive, with specs of 100,000 IOPS quoted... until some users decided to try a larger test and saw performance drop by 90% as the devices filled.
Now Fusion-io's spec sheet says just 4,000 IOPS sustained.
I'd be interested to know if these devices have the same limitations. They're almost certainly still going to be fast in everyday use, but it's not nice to know that your storage could come grinding to a halt at any moment.
Back when SCSI interfaces could do 1MB/s, you could reformat a SCSI disk with a new sector size. Microsoft's systems can only handle 512 bytes/sector, so over time support for larger sectors disappeared. (Larger sectors mean a higher capacity because there are fewer inter-sector gaps, but they waste time or space when many files are smaller than a sector.)
NAND flash typically has pages from 2 to 8K. Last time I used NOR flash, the page size was 64K. You can change any single 1 to a 0 in NOR flash, but it takes time, so it is more efficient to write as many bytes at once as the chip allows (32 on that 16MB chip with 64K pages). The only way to change a zero to a one is to change all the zeroes to ones in an entire page. It used to be possible to change a few ones to zeroes in NAND flash; modern devices cannot do this. It is only possible to write or erase an entire page.
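The asymmetry above is the whole game: programming can only pull bits from 1 to 0, and only a full-block erase gets them back to 1. A minimal sketch of that constraint (my own illustration, not any vendor's code; the 64K block size is taken from the NOR chip mentioned above):

```python
# Sketch of flash program/erase semantics: program = bitwise AND with the
# existing contents (1 -> 0 only); erase = whole block back to all-ones.

BLOCK_SIZE = 64 * 1024  # erase-block size of the NOR chip described above

class NorBlock:
    def __init__(self):
        self.data = bytearray(b"\xff" * BLOCK_SIZE)  # erased state: all ones

    def program(self, offset, payload):
        """Program bytes: each bit can only go 1 -> 0, never 0 -> 1."""
        for i, b in enumerate(payload):
            self.data[offset + i] &= b  # AND models the physics

    def erase(self):
        """The only way to get 0-bits back to 1: erase the whole block."""
        self.data[:] = b"\xff" * BLOCK_SIZE

blk = NorBlock()
blk.program(0, b"\xf0")   # 0xFF AND 0xF0 -> 0xF0
blk.program(0, b"\x0f")   # 0xF0 AND 0x0F -> 0x00: can still clear more bits
assert blk.data[0] == 0x00
blk.erase()
assert blk.data[0] == 0xff
```

Everything a flash filesystem or translation layer does is a workaround for that AND-only write rule.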
When NAND flash is packaged up to pretend to be a hard disk, the operating system will issue some 512-byte writes all over the place. The wrong thing for the disk emulation layer to do is to read an entire 8K page, change 512 bytes of it, erase the page and write the data back. SanDisk have finally caught up with JFFS2.
JFFS2 is a Linux file system designed for NOR flash that is not hidden behind a hardware disk emulation layer. All writes go sequentially to a single page until it is full, then the next erased page is used. This makes some of the data on previous pages irrelevant when a more modern version is written. When there is only one erased page left (or when there is a lull in disk activity), a full page is selected, its useful data is copied to the erased page, and the selected page is erased. This leaves one erased page and one partially written page, so further writes can go into that partially written page.
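The scheme above can be sketched as a toy log-structured store. This is my own illustration of the idea, not JFFS2's actual code: writes append (key, value, sequence) records to the current page; when only the spare erased page is left, garbage collection copies a full page's still-live records into it and erases the victim.

```python
# Toy log-structured store illustrating the append-then-GC scheme above.

PAGE_RECORDS = 4   # records per page (tiny, for illustration)
NUM_PAGES = 3      # one page is always kept erased as GC headroom

class LogStore:
    def __init__(self):
        self.pages = [[] for _ in range(NUM_PAGES)]
        self.current = 0
        self.seq = 0

    def write(self, key, value):
        if len(self.pages[self.current]) == PAGE_RECORDS:
            self._make_room()
        self.seq += 1
        self.pages[self.current].append((key, value, self.seq))

    def _newest(self):
        """Map each key to its (value, seq) with the highest sequence number."""
        newest = {}
        for page in self.pages:
            for key, value, seq in page:
                if key not in newest or seq > newest[key][1]:
                    newest[key] = (value, seq)
        return newest

    def read(self, key):
        return self._newest()[key][0]

    def _make_room(self):
        spare = next(i for i in range(NUM_PAGES) if not self.pages[i])
        victim = next(i for i in range(NUM_PAGES)
                      if i not in (spare, self.current))
        live = self._newest()
        for key, value, seq in self.pages[victim]:
            if live[key][1] == seq:              # still the newest copy: keep it
                self.pages[spare].append((key, value, seq))
        self.pages[victim] = []                  # "erase" the victim page
        self.current = spare                     # keep appending after the copies

s = LogStore()
for k in "abcd":
    s.write(k, k + "1")
for k in "abcd":
    s.write(k, k + "2")      # rewrites fill pages and trigger page advance
assert s.read("a") == "a2"
```

Note that a 512-byte overwrite never triggers a read-modify-erase-write of an 8K page; the stale copy just sits there until GC reclaims it.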
JFFS2 is old tech. It takes a long time to mount large filesystems because the kernel has to read the entire device to map out where the most modern version of all the data is. That's OK for my ancient 16MB chip, but not so good for 1GB, which is itself a bit small by modern standards. There are newer, shinier flash filesystems in Linux. Unfortunately I rarely get to play with them because most flash is hidden behind a defective disk emulator.
NAND flash comes in two flavours: ordinary single-level cell, which is fast and costly per gigabyte, and multi-level cell, which is slow and cheap. You can make a fast SSD out of ordinary NAND flash, or by writing to multiple channels of multi-level flash simultaneously. The most profitable solution is to use multi-level flash with a single-channel controller and sell it at a high price to people who do not check whether the sustained transfer rate is tolerable.
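The multi-channel point is just throughput arithmetic. The per-chip rates below are my own illustrative assumptions, not vendor figures; the shape of the trade-off is what matters:

```python
# Back-of-envelope for the multi-channel trade-off above (assumed numbers).

MLC_CHANNEL_MB_S = 10   # assumed program rate of one slow, cheap MLC channel
SLC_CHANNEL_MB_S = 40   # assumed program rate of one fast, costly SLC channel

def array_write_rate(per_channel_mb_s, channels):
    """Striping writes across channels multiplies sequential throughput."""
    return per_channel_mb_s * channels

# Four cheap MLC channels in parallel match one fast SLC channel...
assert array_write_rate(MLC_CHANNEL_MB_S, 4) == array_write_rate(SLC_CHANNEL_MB_S, 1)
# ...while one MLC channel on its own is 4x slower.
assert array_write_rate(MLC_CHANNEL_MB_S, 1) < array_write_rate(SLC_CHANNEL_MB_S, 1)
```

Which is why the single-channel MLC drive is the one to watch out for on the spec sheet.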
Big SSDs are expensive because people will pay lots of money for the reduced latency. If you want these things at a good price, wait a bit.
... The same thing that's behind the reduced price of SSDs these days: MLC flash. MLC is very cheap compared to SLC, but it is an order of magnitude slower. The SSDs you get these days are often MLC-based, and what ExtremeFFS does for SanDisk is speed up MLC flash operations to the point where one of their ExtremeFFS-driven multi-channel SSDs becomes almost cheaper to buy than an SLC-based version.
Look at it: 40% of the channels are used purely to "hunt down" pages to be erased, while the other 60% are used to read or write. This is definitely a performance improvement, and because of the intrinsic difference in timings, it randomises wear-leveling. I like it.
Even 133x isn't that fast. Theoretically you get about 20MB/s, and the fastest CF around (300x) should give up to 45MB/s. However, that's theoretical, and cheap CF cards are often asymmetric in performance: you get far faster reads than writes.
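Those figures follow from the "x" rating convention, which (as I understand it) is a multiple of the original 1x CD-ROM rate of 150 KB/s:

```python
# Checking the CF speed figures above: 1x = 150 KB/s (the CD-ROM convention).

X_RATE_KB = 150  # KB/s per "x"

def cf_mb_per_s(x_rating):
    return x_rating * X_RATE_KB / 1000  # KB/s -> MB/s (decimal)

print(cf_mb_per_s(133))  # ~20 MB/s
print(cf_mb_per_s(300))  # 45 MB/s
```

And remember that's a best-case burst figure, usually for reads.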
However, the real problem with CF NAND flash is that random write speed will be atrocious. You could get as few as 10 writes per second, against the 80 or more random writes per second you would get from even the most modest of laptop drives.
So for writing large sequential files, such as a digital camera might produce, (moderately) cheap flash might be acceptable, but as a general-purpose disk replacement it will be dreadful. SSD drives have extra things to speed this up somewhat, but prices for now are high. However, flash prices are dropping through the floor, so hold on.
For the most part, the best way of speeding up an old PC is to add more memory. Putting in a faster disk has more limited benefits and cheap flash would be awful.
Thanks for that, it was really interesting to see an explanation of how the different types (NAND/NOR) work in practice.
Thanks for the reply Steven, I'll sit tight for now and see how things go :)