Hey, Samsung: Why so shy about your 960GB flash drive's endurance?

Samsung has gone and used cheap-to-make triple-level cell NAND chippery to make an SSD for data centre use. Will it catch on? Triple-level cell (TLC) flash chips mean fabs can extract more flash capacity from a silicon wafer, so production costs are lower than for two-bit-per-cell MLC technology. Samsung says it gets "a 30 …

COMMENTS

This topic is closed for new posts.
  1. Paul Crawford Silver badge

    Rule #1 of Data Sheets

    Rule #1: if it is not specified - it sucks.

    Rule #2: they probably lied with the stuff that doesn't suck.

  2. TheRealRoland

    A slightly older article, but lots of info.

    http://uk.hardware.info/reviews/4178/10/hardwareinfo-tests-lifespan-of-samsung-ssd-840-250gb-tlc-ssd-updated-with-final-conclusion-final-update-20-6-2013

    and more generic, a bit off-topic, about the samsung caching:

    http://techreport.com/review/25282/a-closer-look-at-rapid-dram-caching-on-the-samsung-840-evo-ssd

    I have a 1TB EVO; granted, it's just in a work laptop. So far no issues, but I'm sure I'm not using it like a data centre would.

  3. Anonymous Coward

    Beyond a joke.

    I have a Terabyte Samsung. I put about a Gig a day on it, tops, maybe a couple of Gig or three a week. That means that, at 500 × 333 weeks, it will last until I'm a shade under 3,300 years old.

    I'm really worried.

    1. razorfishsl

      Re: Beyond a joke.

      Problem is, the failure mode of NAND flash is not gradual: it will go suddenly, and if the area that fails is a structure control area, you will NEVER get your data back. This is because the storage algorithms are proprietary, as are the chip-spanning algorithms; in some cases even the manufacturer does not know where the data will be stored across multiple chips, because placement is based around each chip's individual failure pattern…

      1. Anonymous Coward

        Re: Beyond a joke.

        I do multiple backups to different devices. One of these is a mirrored HDD.

        That said, it may go, but I've got a dozen or so SSDs in use on different boxes. None have gone so far.

        However, since an SSD has a life of, say, 3,000 years, and none has had to be replaced yet, while I've not had an HDD survive five years, I feel the stats are on my side.

        1. Paul Crawford Silver badge

          Re: Beyond a joke.

          "since an SSD has a life of say, 3000 years"

          Mistake #1: you assume that erase/write is the only failure mode, and ignore, say, ion migration under voltage stress, etc. Most devices have a lot of failure modes, but often only one or two are dominant, and you may find SSDs under read-dominated operation have lives of 5-10 years max.

          However, having it mirrored with another device, such as a cheaper HDD, gives you a sporting chance of surviving a failure without problems. (Incidentally, more recent Linux RAID software supports write-mostly for situations like that, where IOPS differ a lot between the storage devices.)

      2. lurker

        Re: Beyond a joke.

        @razorfishsl

        I understood that you could still read data from SSDs once they hit their write-endurance limit? That's why it's a WRITE endurance limit, surely? You just can't change the data any more.

        And regardless, enterprise usage of SSDs should be based upon the usual redundancy strategies, so loss of a single drive != loss of data, same as for the usual chunks-of-spinning-metal drives.

    2. Malcolm Weir Silver badge

      Re: Beyond a joke.

      And you shouldn't be worried, because that sort of usage is perfectly suited to that technology.

      However, if you were running a data centre, then instead of 1GB per day or so, you might be writing a few dozen short transactions a second; each transaction might represent an update of (say) 200 bytes of data, which requires a rewrite of one or two 512-byte sectors, which in turn requires a rewrite of one or two 1MB erasure blocks. Sure, clever caching and optimisation can reduce the number of sectors and erasure blocks that get written, but you would be fortunate (in a normal "random" workload) to do much better than a 10-to-1 improvement (so for every 10 erasure blocks that you might have to rewrite, you actually only need to rewrite one).

      So in that sort of application, you may only be writing 200 bytes × 64 transactions per second (i.e. 12,800 bytes per second, or about 1.1GB per day), but the drive's flash is not seeing that; what it sees is (2 erasure blocks × 1MB per block × 64 transactions / 10 caching advantage) per second.

      Which is 1.1TB per day, and if you put it in service today, it will fail sometime after July 25th 2015 (452 days from now).

      That's not entirely horrible (a standard HD can be expected to die after about 4 times as long), but it's a poor choice for a data center.

      All of which boils down to horses for courses...
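      For what it's worth, the arithmetic above can be sketched in a few lines of Python. The 960GB capacity and 500 full-write-cycle rating are taken from the article; the transaction figures are the ones in the comment, and the result lands within a few days of the 452-day figure (small differences come down to GB-vs-GiB rounding):

```python
# Back-of-envelope sketch of the write-amplification arithmetic above,
# assuming a 960GB drive rated for 500 full write cycles.
BLOCK = 1 * 1024**2          # 1MB erasure block
TPS = 64                     # transactions per second
BLOCKS_PER_TXN = 2           # erasure blocks rewritten per transaction
CACHE_GAIN = 10              # assumed 10-to-1 caching improvement
SECS_PER_DAY = 86_400

# What the host writes vs. what the flash actually sees.
host_per_day = 200 * TPS * SECS_PER_DAY
flash_per_day = BLOCKS_PER_TXN * BLOCK * TPS / CACHE_GAIN * SECS_PER_DAY

# Total endurance: 500 full writes of a 960GB drive.
endurance = 960 * 1024**3 * 500
days = endurance / flash_per_day

print(f"host writes {host_per_day / 1024**3:.1f} GB/day")
print(f"flash sees  {flash_per_day / 1024**4:.2f} TB/day")
print(f"drive lasts ~{days:.0f} days")
```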

      1. DaLo

        Re: Beyond a joke.

        You could mirror two SSDs even in a server environment and then swap one of the drives out for a new one after six months to a year.

        You'll then find that they are unlikely to die at the same time, and if you have a hot spare it will happily rebuild fairly quickly.

        The drive you take out after six months can start a new mirror with a new drive, repeating the scenario.

        You should get warnings when all the over-provisioned blocks are getting used up, so you can make sure you don't hit the write-endurance limits.

  4. Myvekk
    Thumb Up

    SSD endurance test...

    Not sure exactly how it compares to this drive, but Techreport are running a comparison of several 240GB SSDs. Their latest update was that they had written 600TB to them so far. Even the Samsung 840 Pro was still going well, although it can be seen to be using up its over-provisioned blocks.

    http://techreport.com/review/26058/the-ssd-endurance-experiment-data-retention-after-600tb

  5. razorfishsl

    die fast

    The problem these manufacturers don't tell you about is that not only do these NAND flash devices have poor rewrite counts, but even once you put the data in, it cannot remain in the same area of the device for too long: it MUST be rewritten or you will lose the data.

    There are masses of scientific papers available with information on data disruption/corruption caused when nearby areas are written. Quite shockingly, 500 times is not the lowest.

    There are 'NDA' documents from major NAND flash manufacturers stating rewrite counts as low as 300-400 times, even for devices used in currently available products and USB-pluggable devices.

    There is a 'code' letter included on the top of the NAND flash chip packaging that specifies the capability, but as stated the reference documents are under NDA. Even worse, many times you will get product claiming the devices are from a given manufacturer when the NAND flash dies are NOT from the same supplier: they have been re-branded or re-packaged at the die level…

    Again, this data is under NDA.

    1. Paul Crawford Silver badge

      Re: die fast

      Rule #3 of data sheets: NDAs exist because they suck at something or other, and the makers don't want it more widely known or compared.

  6. jb99

    Any auto backup solutions?

    I'm thinking for home use I want the speed of a SSD but don't want to lose my data. Are there any solutions which use the SSD but have a hard disk too and automatically back up everything to the hard disk so that if the SSD fails you can restore from it and continue as if nothing had happened?

    1. Matt_payne666

      Re: Any auto backup solutions?

      You could buy an HBA with drive tiering, where flash is used for cache and cold data is flushed to disk...

      LSI's CacheCade or similar...

      As for Samsung... on this drive, maybe they have over-provisioned it to a silly level... they aren't daft, and know that a data centre doesn't want to be swapping slowish drives too often.

    2. admiraljkb

      Re: Any auto backup solutions?

      @jb99

      Linux Software RAID1 can do it as Paul Crawford mentioned, although in the laptop use case that will not work. For my machines, I use CrashPlan with it backing up to both my house NAS as well as their cloud. That way if any of my SSD equipped boxes go kaboom, I'm covered with only 15 minutes of potential data loss.

      1. Matt in Sydney

        Re: Any auto backup solutions?

        But we are not typically talking about laptops here, or backyard data centres. We are talking about buildings the size of a warehouse with multiple layers of redundancy and sophisticated robotics to maintain the array.

        The devices will be obsolete before the expected end of life in any case.

        At home, we all take our chances, and diarise to replace in 5 years regardless?

        I cannot see myself using the same laptop in 20 years; they are disposable, although many enterprises still run servers well over 10 years old for legacy applications.

        It will soon be hard to buy platter-based DASD; it will go the way of the CRT television.

    3. Alan Brown Silver badge

      Re: Any auto backup solutions?

      Linux raid with write-behind.

      FreeNAS (BSD distro with the ZFS filesystem)

      The latter solution simply uses the SSDs as cache, so it's no big deal if one dies.

  7. a_mu

    Multi-level storage options

    So what we need is an automatic multi-level storage system; let's call it a cache.

    Data that has been accessed a lot sits in, say, DDR SDRAM; less-accessed data in single-level flash; down to tri-level or quad-level write-once flash.

    A lot of the stuff on my discs is write once, read many; very little is write multiple, read multiple.

    So we 'just' need some gear that can move data around as needed! Now that could be fun.
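    A toy Python sketch of that idea: blocks that get read often are promoted to a fast tier, while cold data stays on the cheap tier. The class name, tier layout and promotion threshold are all made up for illustration; real tiering controllers are vastly more sophisticated.

```python
# Toy sketch of automatic tiering: hot (frequently read) data is promoted
# to a fast tier, cold data stays on the cheap, slow tier.
from collections import Counter

class TieredStore:
    def __init__(self, hot_threshold=3):
        self.fast = {}                  # stands in for DRAM / SLC flash
        self.slow = {}                  # stands in for TLC / QLC flash
        self.reads = Counter()          # per-key read counts
        self.hot_threshold = hot_threshold

    def write(self, key, value):
        # New data always lands on the cheap tier first.
        self.slow[key] = value

    def read(self, key):
        self.reads[key] += 1
        if key in self.fast:
            return self.fast[key]
        value = self.slow[key]
        if self.reads[key] >= self.hot_threshold:
            # Read often enough: promote to the fast tier.
            self.fast[key] = self.slow.pop(key)
        return value

store = TieredStore()
store.write("a", b"read-many")
store.write("b", b"read-once")
for _ in range(3):
    store.read("a")       # third read crosses the threshold, promoting "a"
store.read("b")           # "b" stays cold
print(sorted(store.fast), sorted(store.slow))
```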

  8. Alan Brown Silver badge

    for my use

    500 full write cycles is plenty. A drawer full of these (i.e. 200TB+) would end its support life before the drives wore out.

