Rule #1 of Data Sheets
Rule #1: if it is not specified, it sucks.
Rule #2: they probably lied with the stuff that doesn't suck.
Samsung has gone and used cheap-to-make triple-level cell NAND chippery to make an SSD for data centre use. Will it catch on? Triple-level cell (TLC) flash chips mean fabs can extract more flash capacity from a silicon wafer, and so production costs are lower than for two-level cell MLC technology. Samsung says it gets "a 30 …
A slightly older article, but lots of info.
http://uk.hardware.info/reviews/4178/10/hardwareinfo-tests-lifespan-of-samsung-ssd-840-250gb-tlc-ssd-updated-with-final-conclusion-final-update-20-6-2013
and more generic, a bit off-topic, about the samsung caching:
http://techreport.com/review/25282/a-closer-look-at-rapid-dram-caching-on-the-samsung-840-evo-ssd
I have a 1TB EVO, granted, just in a work laptop. So far no issues, but I'm sure I'm not using it like a data center would.
Problem is, the failure mode is not gradual with NAND flash; it will go suddenly, and if the area that goes is a control structure area, you will NEVER get your data back… This is because the storage algorithms are proprietary, as are the chip-spanning algorithms. In some cases even the manufacturer does not know where the data will be stored across multiple chips, because it is based on each chip's individual failure pattern…
I do multiple backups to different devices. One of these is a mirrored HDD.
That said, it may go, but I've got a dozen or so SSDs in use on different boxes. None have gone so far.
However, since an SSD has a life of, say, 3000 years, and none of mine have had to be replaced yet, while I've not had an HDD survive 5 years, I feel the stats are on my side.
"since an SSD has a life of say, 3000 years"
Mistake #1: you assume that erase/write is the only failure mode, rather than, say, ion migration under voltage stress, etc. Most devices have a lot of failure modes, but often only one or two are dominant, and you may find SSDs have lives under read-dominated operation of 5-10 years max.
However, having it mirrored with another device, such as a cheaper HDD, gives you a sporting chance of surviving a failure without problems. (Incidentally the more recent Linux RAID software supports write-mostly for situations like that where IOPS differ a lot between the storage devices).
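As a concrete illustration of the write-mostly feature mentioned above, an md RAID1 over an SSD and an HDD might be created like this (the device names are purely illustrative; check your own layout before running anything of the kind):

```shell
# Build a RAID1 mirror from a fast SSD and a slower HDD. Marking the
# HDD --write-mostly makes md serve reads from the SSD and only send
# writes to the HDD, so the slow disk doesn't drag read IOPS down.
# /dev/sda1 (SSD) and /dev/sdb1 (HDD) are illustrative device names.
mdadm --create /dev/md0 --level=1 --raid-devices=2 \
      /dev/sda1 --write-mostly /dev/sdb1
```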
@razorfishsl
I understood that you could still read data from SSDs once they hit their write endurance limit? That's why it's a WRITE endurance limit surely? You just can't change the data any more.
And regardless, enterprise usage of SSDs should be based upon the usual redundancy strategies, so loss of a single drive != loss of data, same as for the usual chunks-of-spinning-metal drives.
And you shouldn't be worried, because that sort of usage is perfectly suited to that technology.
However, if you were running a data center, perhaps instead of 1GB per day or so, you were writing a few dozen short transactions a second; each transaction might represent an update of (say) 200 bytes of data, which requires a rewrite of one or two 512 byte sectors, which require a rewrite of 1 or 2 1MB erasure blocks. Sure, clever caching and optimization can reduce the number of sectors and erasure blocks that get written, but you would be fortunate (in a normal "random" workload) to do much better than a 10-to-1 improvement (so for every 10 erasure blocks that you might have to rewrite, you actually only need to rewrite 1).
So in that sort of application, you may only be writing 200 bytes x 64 transactions per second (i.e. 12,800 bytes per second, or 1.1GB/day), but the drive's flash is not seeing that; what it sees is (2 erasure blocks x 1MB per block x 64 transactions / 10 caching advantage) per second.
Which is 1.1TB per day, and if you put it in service today, it will fail sometime after July 25th 2015 (452 days from now).
That's not entirely horrible (a standard HD can be expected to die after about 4 times as long), but it's a poor choice for a data center.
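The back-of-the-envelope arithmetic above can be sketched in a few lines; all the figures (200-byte transactions, 64 per second, 1MB erasure blocks, the 10-to-1 caching advantage) are the assumptions from the comment, not measurements:

```python
# Write-amplification sketch using the comment's assumed workload.
TX_PER_SEC = 64
PAYLOAD_BYTES = 200           # real data per transaction
ERASE_BLOCK = 1_000_000       # 1MB erasure block
BLOCKS_PER_TX = 2             # erasure blocks rewritten per transaction
CACHE_FACTOR = 10             # assumed caching/optimisation advantage
SECS_PER_DAY = 86_400

# What the host writes vs. what the flash actually absorbs, per day.
host_writes = PAYLOAD_BYTES * TX_PER_SEC * SECS_PER_DAY
flash_writes = ERASE_BLOCK * BLOCKS_PER_TX * TX_PER_SEC // CACHE_FACTOR * SECS_PER_DAY

print(f"host sees  {host_writes / 1e9:.1f} GB/day")    # ~1.1 GB/day
print(f"flash sees {flash_writes / 1e12:.1f} TB/day")  # ~1.1 TB/day
print(f"write amplification ~{flash_writes // host_writes}x")
```

So a workload the host perceives as ~1.1GB/day turns into ~1.1TB/day of flash wear: a thousand-fold amplification under these assumptions.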
All of which boils down to horses for courses...
You could mirror two SSDs even in a server environment and then swap one of the drives out for a new one after six months to a year.
You'll then find that they are unlikely to die at the same time and if you have a hot spare will happily rebuild fairly quickly.
The drive you take out after 6 months can start up a new mirror with a new drive to have a similar scenario.
You should get warnings when all the over-provisioned blocks are getting used up, and so can make sure you don't hit the write-endurance limits.
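Checking for those warnings can be done with smartmontools; a rough sketch (the attribute names vary by vendor, and these greps are only illustrative):

```shell
# Dump SMART attributes and pick out the wear-related ones. On many
# Samsung drives the relevant attributes are Wear_Leveling_Count and
# Used_Rsvd_Blk_Cnt; other vendors name them differently.
smartctl -A /dev/sda | grep -Ei 'wear|rsvd|reserved'
```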
Not sure exactly how it compares to this drive, but TechReport are running a comparison of several 240GB SSDs. Their latest update was that they had written 600TB to them so far. Even the Samsung 840 Pro was still going well, although it can be seen to be using up its over-provisioned blocks.
http://techreport.com/review/26058/the-ssd-endurance-experiment-data-retention-after-600tb
The problem these manufacturers don't tell you about is that not only do these NAND flash devices have poor rewrite counts, but even IF you get the data in, it cannot remain in the same area of the device for too long; it MUST be rewritten or you will lose the data.
There are masses of scientific papers available with information on data disruption/corruption caused when nearby areas are written. Quite shockingly, 500 rewrites is not the lowest figure.
There are 'NDA' documents from major NAND flash manufacturers stating rewrite endurance as low as 300-400 cycles, even for devices used in currently available products and USB pluggable devices.
There is a 'code' letter included on the top of the NAND flash chip packaging that specifies the capability, but as stated the reference documents are NDA. Even worse, many times you will get product that states the devices are from a given manufacturer, but the NAND flash dies are actually NOT from the same supplier; they have been re-branded or re-packaged at the die level…
Again, this data is NDA.
I'm thinking for home use I want the speed of a SSD but don't want to lose my data. Are there any solutions which use the SSD but have a hard disk too and automatically back up everything to the hard disk so that if the SSD fails you can restore from it and continue as if nothing had happened?
You could buy an HBA with drive tiering, where flash is used for cache and cold data is flushed to disk...
LSI's cachecade or similar...
As for Samsung.... on this drive, maybe they have over-provisioned the drive to a silly level... they aren't daft and know that a data centre doesn't want to be swapping slowish drives too often.
@jb99
Linux Software RAID1 can do it as Paul Crawford mentioned, although in the laptop use case that will not work. For my machines, I use CrashPlan with it backing up to both my house NAS as well as their cloud. That way if any of my SSD equipped boxes go kaboom, I'm covered with only 15 minutes of potential data loss.
But we are not typically talking about laptops here, or backyard data centres. We are talking about buildings the size of a warehouse with multiple layers of redundancy and sophisticated robotics to maintain the array.
The devices will be obsolete before the expected end of life in any case.
At home, we all take our chances, and diarise to replace in 5 years regardless?
I cannot see myself using the same laptop in 20 years; they are disposable, although many enterprises still run servers well over 10 years old for legacy applications.
It will soon be hard to buy platter-based DASD; it will go the way of the CRT television.
So what we need is an automatic multi-level storage system; let's call it a cache.
Data that has been accessed a lot sits in, say, DDR SDRAM; less-accessed data in single-level flash, on down to tri-level or quad-level write-once flash.
A lot of the stuff on my discs is write once, read many; very little is write multiple / read multiple.
So we 'just' need some gear that can move data around as needed!
Now that could be fun.
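The multi-level idea above can be sketched as a toy two-tier store in Python; the class name and capacities are made up for illustration, and real tiering products (LSI CacheCade, bcache, dm-cache) are far more involved:

```python
from collections import OrderedDict

class TieredStore:
    """Toy two-tier store: hot items live in a small fast tier
    (think DDR SDRAM), everything else sits in a slow tier
    (think flash). Illustrative only, not production code."""

    def __init__(self, hot_capacity=3):
        self.hot = OrderedDict()   # fast tier, kept in LRU order
        self.cold = {}             # slow tier, unbounded here
        self.hot_capacity = hot_capacity

    def put(self, key, value):
        self.cold[key] = value     # write-once data lands in the cold tier

    def get(self, key):
        if key in self.hot:
            self.hot.move_to_end(key)      # refresh LRU position
            return self.hot[key]
        value = self.cold[key]
        self.hot[key] = value              # promote on access
        if len(self.hot) > self.hot_capacity:
            self.hot.popitem(last=False)   # demote least-recently-used
        return value
```

Reads promote data into the fast tier and evict the least-recently-used entry, which is exactly the "move data around as needed" behaviour the comment is after.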