SandForce's revolutionary SSD controller

Major storage OEMs are expected to release flash memory products based on commodity NAND chips this year, built on a new SandForce controller that offers fast, symmetric read and write speeds, 80 times the endurance of notebook flash, and peak performance maintained for five years. SandForce is a newly visible fabless startup. …

COMMENTS

  1. Anonymous Coward
    Stop

    Deleting blocks, bad idea

    Writes do not take longer because you delete cells. The last thing you want to do to an SSD is start deleting blocks, because each time you do, it counts as an erase cycle for the block, which reduces its life. Writes take longer because it simply takes longer to put a charge in a cell than it does to read it.

    An SSD will only start deleting blocks after all blocks have been written to once. It is a basic wear-levelling algorithm: instead of overwriting blocks, it marks them as invalid when you delete the data, and does not overwrite them unless you need the space or all the other blocks on the disk have completed an equal number of erase cycles.
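
    A toy model of that idea (all names and structures here are illustrative, not any real controller's firmware): invalidated blocks pile up without being erased, and the erase cycle is only paid once the pre-erased pool runs dry.

    ```python
    # Hypothetical wear-levelling sketch: mark stale blocks invalid, defer
    # erases, and always write to the least-worn pre-erased block.

    class Block:
        def __init__(self, block_id):
            self.block_id = block_id
            self.erase_count = 0    # erase cycles this block has endured
            self.state = "free"     # "free" (pre-erased), "live", or "invalid"

    def overwrite(blocks, live_block, new_data, storage):
        """Redirect a logical overwrite to a fresh block; don't erase in place."""
        free = [b for b in blocks if b.state == "free"]
        if not free:
            # Pre-erased pool exhausted: only now erase the invalidated blocks,
            # which is the step that actually costs an erase cycle.
            for b in blocks:
                if b.state == "invalid":
                    b.erase_count += 1
                    b.state = "free"
            free = [b for b in blocks if b.state == "free"]  # assumes some existed
        target = min(free, key=lambda b: b.erase_count)      # level the wear
        target.state = "live"
        storage[target.block_id] = new_data
        if live_block is not None:
            live_block.state = "invalid"   # old copy marked stale, not erased
        return target

    blocks = [Block(i) for i in range(4)]
    storage = {}
    b = overwrite(blocks, None, "v1", storage)
    b = overwrite(blocks, b, "v2", storage)   # "v1" block invalidated, erase deferred
    ```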

  2. Anonymous Coward

    From the AnandTech SSD article

    'A single NAND flash die is subdivided into blocks. The typical case these days is that each block is 512KB in size. Each block is further subdivided into pages, with the typical page size these days being 4KB.

    Now you can read and write to individual pages, so long as they are empty. However once a page has been written, it can’t be overwritten, it must be erased first before you can write to it again. And therein lies the problem, the smallest structure you can erase in a NAND flash device today is a block. Once more, you can read/write 4KB at a time, but you can only erase 512KB at a time.'

    http://www.anandtech.com/storage/showdoc.aspx?i=3531&p=6
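
    Putting numbers on those typical figures makes the asymmetry concrete (the exact geometry varies by part; this is just the arithmetic from the quote):

    ```python
    # Worst case for an in-place overwrite of one page in a full block:
    # read the other pages out, erase the whole block, write everything back.

    BLOCK_SIZE_KB = 512
    PAGE_SIZE_KB = 4
    PAGES_PER_BLOCK = BLOCK_SIZE_KB // PAGE_SIZE_KB    # 128 pages

    logical_write_kb = PAGE_SIZE_KB                     # the 4KB you asked for
    physical_write_kb = PAGES_PER_BLOCK * PAGE_SIZE_KB  # the 512KB actually cycled
    print(f"{PAGES_PER_BLOCK} pages/block; "
          f"worst-case write amplification: {physical_write_kb // logical_write_kb}x")
    # -> 128 pages/block; worst-case write amplification: 128x
    ```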

  3. Flocke Kroes Silver badge

    @AC 01:30

    I am afraid it is less sane than that.

    There are physical sectors - specific areas of silicon on a chip - and there are logical sectors that the operating system asks to read and overwrite. There is also a map that converts logical sector numbers into physical sector numbers. The map is stored in physical sectors and maintained by the flash controller chip. The operating system does not have access to the map.

    When you ask the operating system to delete a file, the operating system writes a few sectors. One will be the directory containing the file. Another will be the sector containing the file's inode (file size, creation/modification/access dates, where the file is stored on the disk). There may also be changes to other sectors to account for the newly deallocated space.

    The flash controller writes the data for each of these logical sector writes to some pre-erased physical sectors, then updates the map so that requests to read these logical sectors return data from the newly mapped physical sectors. At some point the physical sectors that contain the old contents of the directory, inode, free space list, journal and so on get erased. The flash controller has to keep track of which sectors are erased, how many times they have been erased, and where the map is. Writing a single logical sector to flash can result in several physical sectors being written or erased.
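
    A toy model of that mapping (a hypothetical structure, nowhere near a real controller's firmware) shows the indirection: logical writes land in pre-erased physical sectors, the map is updated, and the old physical sector just queues up for a later erase.

    ```python
    # Sketch of the logical-to-physical map the post describes.

    class FlashController:
        def __init__(self, n_physical):
            self.map = {}                              # logical -> physical sector
            self.pre_erased = list(range(n_physical))  # sectors ready to accept writes
            self.stale = []                            # old sectors awaiting erase
            self.data = {}                             # physical sector -> contents

        def write_logical(self, logical, contents):
            physical = self.pre_erased.pop(0)   # the write goes to a fresh sector
            self.data[physical] = contents
            old = self.map.get(logical)
            if old is not None:
                self.stale.append(old)          # old contents invalidated, not erased
            self.map[logical] = physical        # reads now resolve to the new sector

    ctrl = FlashController(n_physical=8)
    ctrl.write_logical(0, "directory entry")
    ctrl.write_logical(0, "updated directory entry")  # remap; old sector goes stale
    print(ctrl.map, ctrl.stale)   # {0: 1} [0]
    ```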

    During all of this, you may have noticed that none of the data for the file that was deleted has been modified at all. The flash controller will dutifully preserve the file's contents until the operating system decides to allocate those logical sectors to a new file.

    A good way to massively improve the performance of flash disks would be to add a new disk command that allowed the operating system to tell the flash controller which sectors no longer contain useful information. This would give the flash controller advance warning of which sectors can be erased, so it has a wider choice of erased sectors to pick from for the next write operation.
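
    A sketch of the proposed command, which is essentially what the ATA TRIM command provides (the method and structure names here are made up for illustration):

    ```python
    # The OS hints that some logical sectors hold dead data, so the controller
    # can queue their physical sectors for erasure before a write needs them.

    class FlashController:
        def __init__(self, mapping):
            self.map = dict(mapping)   # logical -> physical sector
            self.stale = set()         # physical sectors known to be dead

        def trim(self, logical_sectors):
            """OS hint: these logical sectors no longer contain useful data."""
            for logical in logical_sectors:
                physical = self.map.pop(logical, None)
                if physical is not None:
                    self.stale.add(physical)   # erasable in the background now

    # After deleting a file, the filesystem passes the freed sectors down:
    ctrl = FlashController({100: 7, 101: 8})
    ctrl.trim([100, 101])
    print(sorted(ctrl.stale))   # [7, 8]: pre-erasable ahead of the next write burst
    ```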

    An even better solution would be to forget about complex flash controllers completely and let the operating system maintain the logical-to-physical map itself. (Linux already has a choice of controllerless, flash-specific file systems available, but Linux users still get to pay for expensive, performance-hitting hardware that works around a lack of development in a certain operating system.)

  4. Anonymous Coward
    Paris Hilton

    RE: Deleting blocks, bad idea

    You really do need to clear the block before you can write to it. It's how flash has always worked, and yes, it's the reason the disk has a finite lifespan. By your logic, you would never need to "erase" anything - merely "write" over it - and never degrade the lifespan.

  5. Anonymous Coward

    Sequential writes seem fine

    ...but how does it perform when doing random writes? Nobody wants another JMicron controller that is fast sequentially, but becomes slower than laptop drives when doing random writes.

  6. Rick White
    Alert

    Reality Check

    Reduce an ioDrive's write performance to that of SandForce's controller and you will easily exceed their five-year rating.

    This is because the single largest determinant of longevity is how fast one writes, and under real-world use (mixed read/write load), the SandForce controller will achieve 1/4 to 1/8 the write performance of an ioDrive.

    In other words, their endurance, as measured by how much data can be written to the drive over its lifetime, is actually less than an ioDrive's.

    That's because the second largest determinant of longevity is how many write cycles can safely be made on the NAND without risking data loss. Here, Fusion-io's unique, patent-pending FlashBack Protection allows for several times more cycles, because it can tolerate chip failures and not lose data.

    SandForce's longevity-extending techniques are well understood in the industry, and don't significantly alter the underlying endurance cycle counts the way FlashBack Protection does.

    Oh and yes, I'll admit I'm biased, but it doesn't change the facts.

  7. Ross Fleming

    Pipeline

    Not sure if this is already done, but surely the simplest way to get around the delays waiting for erases is to introduce a pipeline structure, i.e. erase the next block at the same time as writing the previous one?
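
    Mocked up with threads purely to show the overlap being suggested (real controllers would do this in hardware across independent flash dies or planes; everything here is hypothetical):

    ```python
    # Two-stage pipeline: start erasing block N+1 while block N is being written.

    import threading
    import time

    def erase(block):
        time.sleep(0.002)   # erase is the slow operation being hidden

    def write(block):
        time.sleep(0.001)

    def pipelined_writes(blocks):
        """Assumes blocks[0] is already pre-erased."""
        eraser = None
        for i, block in enumerate(blocks):
            if i + 1 < len(blocks):
                eraser = threading.Thread(target=erase, args=(blocks[i + 1],))
                eraser.start()          # erase the next block in the background...
            write(block)                # ...while this one is written
            if eraser is not None:
                eraser.join()           # the next write must wait for its erase

    pipelined_writes(list(range(10)))   # erase latency mostly hidden behind writes
    ```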

    Mind you - I don't fancy this whole compression on the fly thing. Once stung by DoubleSpace, always wary...

  8. TeeCee Gold badge
    Joke

    ".....comes from Nvidia."?

    That one came up twice too.

    Next week's tech tip: Frying an egg on your SSD controller while ripping DVDs.

  9. Peter Kay

    Compression - great idea as long as it's backed by 1:1 storage

    Think about it - compression is absolutely ideal for SSD, provided it has at least as much physical flash as stated - and preferably more.

    Given that the slowdown (as per the AnandTech article) is due to the flash being fully utilised and the drive then having to resort to slow erase cycles on a large chunk of flash, anything that prevents it reaching that limit is to be applauded. Anything that avoids erasing a limited-erase-cycle medium before it is absolutely necessary is also to be encouraged.
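
    A rough illustration of the saving, using zlib as a stand-in for whatever proprietary transform SandForce actually applies: every page not written is wear never incurred, while incompressible data still needs the full 1:1 backing.

    ```python
    # Compressible data consumes far fewer 4KB flash pages per logical write.

    import zlib

    PAGE_SIZE = 4096
    data = b"typical redundant file content " * 4096   # ~124KB, highly compressible

    raw_pages = -(-len(data) // PAGE_SIZE)              # ceiling division
    compressed_pages = -(-len(zlib.compress(data)) // PAGE_SIZE)

    print(f"pages written raw: {raw_pages}, compressed: {compressed_pages}")
    # -> pages written raw: 31, compressed: 1
    ```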

