SanDisk pitches 100x SSD speed boost tech

SanDisk has come up with a tweak for solid-state drives that, it claims, will accelerate SSD random write speeds by a factor of 100. Dubbed ExtremeFFS, the technology is a Flash file management system that decouples physical and logical storage, allowing data to be written to a drive randomly yet very quickly. Essentially, …
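
In rough outline, "decoupling physical and logical storage" is what a log-structured flash translation layer does: every logical write lands on the next free physical page, so random writes from the host become sequential writes at the flash level. The sketch below is purely illustrative - the class, mapping table and page counts are hypothetical, not SanDisk's implementation.

# Illustrative sketch of a log-structured flash translation layer (FTL).
# Not ExtremeFFS itself - just the general "decouple logical from physical"
# idea: every write goes to the next erased page, and a mapping table
# remembers where each logical page currently lives.

class FlashTranslationLayer:
    def __init__(self, num_pages):
        self.mapping = {}                 # logical page -> physical page
        self.flash = [None] * num_pages   # stand-in for NAND pages
        self.next_free = 0                # head of the "log" of erased pages

    def write(self, logical_page, data):
        if self.next_free >= len(self.flash):
            raise RuntimeError("log full - garbage collection needed")
        # The old physical copy (if any) is simply left stale; a background
        # garbage collector would reclaim it later.
        self.flash[self.next_free] = data
        self.mapping[logical_page] = self.next_free
        self.next_free += 1

    def read(self, logical_page):
        phys = self.mapping.get(logical_page)
        return self.flash[phys] if phys is not None else None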

COMMENTS

This topic is closed for new posts.
  1. Anonymous Coward
    Thumb Down

    New name for old tech

    Apart from the core idea being based on a 12-year-old product, all it boils down to is a bit of cache and a lazy writer. This sort of tech should have been included in flash storage from the beginning. Or could I be missing something?

  2. amanfromMars Silver badge
    Alien

    Championing a Product/Program with its Beta Use in a Special Application .... Virtual Operation.

    "Indeed, why didn't SanDisk use this system sooner?" It is most likely being presently used/betatested by them...... hence their confidence in its performance.

  3. Tim Spence
    Joke

    FFS...

    ...has meant something completely different on the internets since long before the technology was invented.

    Maybe it means "Why won't this expensive SSD work faster, FFS!"

  4. evilbobthebob
    Joke

    FFS?

    That's what we'll be saying when it doesn't work...

  5. Ken Hagan Gold badge
    Unhappy

    Cache?

    Sounds *awfully* like a cache to me, like the ones we've had on (spinning) drives for yonks. The upside is that it works without the OS needing to have a clue. The downside is that if your system goes down before the device has flushed the cache, your data is trashed despite the drive having told the OS "yes, I'm done". Of course, you could add a new command to let the OS wait for the drive to flush the cache, and you'd be back where you started.

    This is pointless. The place to put a cache is in the OS, where there is more RAM and where the cache can be shared amongst other devices and shrunk or grown depending on load.
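
    To make that concrete, here is a minimal sketch of the write-back-cache-plus-flush pattern (the class and its methods are made up for illustration, not any real drive's firmware). Writes are acknowledged straight out of the cache, and durability only returns when the host blocks on an explicit flush - which is exactly the wait the cache was supposed to hide.

    # Minimal sketch: a drive that acknowledges writes from a volatile
    # write-back cache, plus the hypothetical "flush" command the host
    # would have to wait on to get its durability back.

    class CachingDrive:
        def __init__(self, backing_store):
            self.backing = backing_store   # dict standing in for the slow medium
            self.dirty = {}                # write-back cache: lba -> data

        def write(self, lba, data):
            self.dirty[lba] = data         # returns at once: "yes, I'm done"

        def read(self, lba):
            return self.dirty.get(lba, self.backing.get(lba))

        def flush(self):
            # The "new command": blocks until every cached write has
            # actually reached the medium. Lose power before calling this
            # and the dirty entries are gone.
            for lba, data in self.dirty.items():
                self.backing[lba] = data
            self.dirty.clear()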

  6. Bronek Kozicki
    Thumb Up

    RAM?

    I guess the "system" only works well with a large cache, and memory was not as cheap back in 2004 as it is now. Also, it is only recently that SSDs have come to be seen as a viable alternative to HDDs.

  7. Nigel
    Boffin

    On the other hand

    On the other hand, no hardware magic inside the device would be needed were it not for the requirement that it be formatted by the host operating system with a decade-old filesystem (FAT32), itself based on a decade-older one (FAT) that was notorious for wearing out the index sector of the floppy disks on which it once lived. Instead, why not just give *full* control over the storage chip to the OS, which could wear-level and optimise by deploying an industry-standard open filesystem designed with this medium in mind? (Then watch the Linux community come up with an even better filesystem that goes twice as fast!)

    Yes, a floppy disk was another medium that really, really needed a wear-levelling filesystem!

    In passing, XP installs much faster off CD into a newly created VMware VM on a 2GB RAM system than natively onto the same hardware. Amazing what intelligent use of cache memory and lazy writing can accomplish, isn't it?

  8. Dennis Healey
    Paris Hilton

    Cache Fix

    So we want to use a write-behind cache plus wear levelling in order to speed up mass storage. We still have the same problem of inadvertent power failure, system abnormality and so on causing data loss.

    In my simplistic world this is solved by the simple expedient of isolating the card memory behind the interface in such a way that the card interface (whether on-board or off-board) detects a power failure and has enough residual power to write out the cache. Similarly, the card's isolation means that an OS failure or abnormality is dealt with by the storage system in the normal way (i.e. writes every X ms) or by the device when the operator reboots.

    This strikes me as eminently doable, and engineers love users who say "that sounds easy"! So tell me what is wrong with this idea.......

    The difference between then and now is that then we used power-hungry and relatively slow disks, whereas now we have solid-state devices operating quite quickly at low power.

    Paris, who knows how to cope with a late data dump.

  9. Jasmine Strong
    Heart

    Hey, Nigel

    ...that's exactly what JFFS2 already does on Linux. And yes, it is faster, more reliable and more efficient than any NFTL-type product.

  10. radian

    quick question...

    '.....will accelerate SSD random write speeds by a factor of 100'.

    100x faster than what? Their current SSDs (which are pretty slow to start with)? The Intel X25-M? An Mtron Pro?

    I still want one anyway.

  11. Paul Murphy
    Paris Hilton

    So should SSDs have an on-board battery?

    So that when the power goes down they have enough left to empty the write cache?

    Sounds like a good idea until the battery needs replacing :-)

    PH - doesn't need batteries at all, though you could 'cache' all sorts of diseases from her.

  12. Anonymous Coward
    Anonymous Coward

    Nothing wrong with caching.

    HyperOs do a complete RAM drive.

    So does Gigabyte (http://www.youtube.com/watch?v=Gp676uX3EXA)

    Both of these have battery backup. The HyperOs one actually backs up automatically to SSD in a power failure.

    These two alternatives redefine performance for me.

    I see no problem at all with having a tiny rechargeable battery in an SSD to provide a minute's power, if the solution allows this kind of performance for writes.

    I've long thought that DVD drives and hard drives should have their own DIMM slots so users can buy cache RAM. A DVD drive with 8GB of RAM would transfer a whole hour of music into cache in 90 seconds and never spin up again; likewise, an inserted DVD would take 7 minutes of spinning, after which it would offer a random-access seek time of basically nothing.

    Similarly, with very little work and 16GB of RAM on a hard drive, Windows XP's read-only files, such as C:\Windows and C:\Program Files, could be cached on first use or on lazy read. They almost never change. You could even cache files simply because the head is passing over a track: "We're going to track 5, sector 3 to get something. We might as well cache most of Outlook.exe on the way, as we're passing over it to get there."

    I'm surprised, bearing in mind the cost of RAM, that Microsoft doesn't do large-scale RAM-disk mirroring of files that aren't modified. It's obvious.
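
    Something like this toy read-through cache, say - entirely hypothetical, not anything Microsoft or a drive vendor actually ships: rarely-changing files are mirrored into RAM on first use and served from there afterwards.

    # Toy read-through cache: read-only files are copied into RAM the first
    # time they are read and served from memory after that. Paths and the
    # size budget are illustrative only.

    class ReadCache:
        def __init__(self, max_bytes=16 * 1024 ** 3):   # "16GB of RAM on a drive"
            self.max_bytes = max_bytes
            self.used = 0
            self.store = {}                              # path -> file contents

        def read(self, path):
            if path in self.store:                       # already mirrored in RAM
                return self.store[path]
            with open(path, "rb") as f:                  # first use: hit the disk...
                data = f.read()
            if self.used + len(data) <= self.max_bytes:  # ...then keep a copy
                self.store[path] = data
                self.used += len(data)
            return data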

  13. Peter Kay

    cache and power loss...

    This is different from a caching RAID controller attached to fixed disks how?

    It's true that losing power will usually result in data not being written to the device; however, this is likely anyway, as most applications use the OS disk cache. Stick the computer on a UPS and the controller is also protected - a battery backup on the RAID controller is useful, but not used that often.

    @Dennis: controllers don't power the drives for long enough to write the data - they power the controller memory long enough for sysadmins to power the system back up again and for the cached writes to be written. This uses vastly less power.

  14. Steven Jones

    @Ken Hagan

    Write-back caches using non-volatile memory are extensively used in any decent mid-range storage array worth buying. That gets write times down to the sub-millisecond level, and it's all perfectly safe. "All" that is required to make this safe in a flash drive is to have sufficient power available to flush the cache out before power dies. A decent-sized capacitor (especially now that we have supercapacitors) should be plenty to keep the electronics going long enough to do it.

    Non-volatile write caching most definitely belongs in the storage device - not the server. Server caching is great for reads and non-synchronous write buffers, but it's absolutely the wrong place for non-volatile synchronous write caches, especially where you have shared-access storage. Lose the server and the data is stuck in the server's non-volatile cache and not where it belongs, on the storage device.

    The problem with write times on SSDs, at least for all but the ultra-expensive ones, is small random write latency. I don't care a jot how the SSD does it, as long as it happens quickly and securely. Non-volatile write-caching in the SSD is fine by me.
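
    As a rough sketch of the idea (the class and the power-fail hook are invented for illustration, not any vendor's firmware): acknowledged writes sit in a small buffer, and the power-fail signal triggers one final drain while the capacitor keeps the electronics alive.

    # Sketch of "flush on power loss": writes are acked from a buffer with
    # sub-millisecond latency, and the power-fail interrupt drains whatever
    # is still pending while a supercapacitor keeps the controller up.

    class PowerSafeWriteCache:
        def __init__(self, nand_write):
            self.nand_write = nand_write   # callback that commits one block to flash
            self.pending = []              # (lba, data) pairs already acked to the host

        def write(self, lba, data):
            self.pending.append((lba, data))   # acknowledge immediately

        def on_power_fail(self):
            # Called from the power-fail interrupt: the capacitor's energy
            # budget only has to cover draining what is still pending.
            while self.pending:
                lba, data = self.pending.pop(0)
                self.nand_write(lba, data)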

  15. Ken Hagan Gold badge

    @Steven Jones

    Thanks. The word "non-volatile" wasn't part of the original article. With that tiny addition, everything makes sense.
