Out of curiosity, how does this endurance compare with a traditional hard drive?
The Tech Report has been running an ‘SSD Endurance Experiment’ utilising consumer SSDs to see how long they last and what their "real world" endurance really is. It seems that pretty much all of the drives are very good and last longer than their manufacturers state. It's a fairly unusual state of affairs – something in IT that does better than it states on the can.
That flash RAM fails after sufficient write activity is fundamental to the physics of how it operates. In contrast, a magnetic film does not wear out merely as a consequence of being written, however many times it is written to.
On the other hand, the moving parts in a hard disk do wear out, usually in a slowly progressive manner. Any contamination inside the HDA can also cause gradual degradation of the read head and magnetic surfaces (and sometimes much less gradual degradation: catastrophic failure known as a head crash, akin to an aeroplane flying into a mountain instead of through grains of wind-blown sand).
Anyway, my experience is that quite a few hard disks "never" fail (i.e. they are declared obsolete and junked while still working perfectly after five or ten years in service). Many fail gracefully: you get warning that they are deteriorating through their SMART statistics, and you can hot-replace them proactively if you are using mirroring or RAID, or just shut down and clone to a new disk with ddrescue. A good fraction of the rest are rescuable even after failing hard as far as an operating system is concerned, i.e. you can use ddrescue to copy them to a new hard drive with no loss of data after many retries, or with only a few sectors unreadable. Only a smallish percentage go from disk to brick "just like that", and a majority of those inside their first month in service ("infant mortality").
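For anyone unfamiliar with the workflow described above, here is a minimal sketch of the classic two-pass GNU ddrescue recipe, wrapped in Python. It assumes GNU ddrescue is installed; the device paths and mapfile name are placeholders for illustration:

```python
import subprocess

def ddrescue_commands(src, dst, mapfile):
    """Build the classic two-pass GNU ddrescue recipe.

    Pass 1 (-n) copies everything readable, skipping bad areas quickly;
    pass 2 (-r3) goes back and retries each bad sector up to three times.
    The mapfile records progress, so an interrupted run can be resumed.
    """
    return [
        ["ddrescue", "-n", src, dst, mapfile],
        ["ddrescue", "-r3", src, dst, mapfile],
    ]

def clone_failing_disk(src="/dev/sdb", dst="/dev/sdc", mapfile="rescue.map"):
    # Placeholder device paths; run each pass in order, stopping on error.
    for cmd in ddrescue_commands(src, dst, mapfile):
        subprocess.run(cmd, check=True)
```

The key design point is the mapfile: because ddrescue records which regions have been copied, you can power the failing drive down between passes and pick up where you left off.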
The controller of a flash drive must surely know how many pages have failed and been replaced from the pool of spares. So what's going on? Are SSD controllers not being honest with their SMART statistics (for example with SMART 182, "Erase Fail Count")? Or did the testers simply write until failure, without monitoring the statistics to see whether impending failure was easy to spot? Or are there whole-chip failure modes with flash storage that make abrupt failure far more likely than with other VLSI chips such as hard disk controllers? (Well, there are 8 or 16 more VLSI chips in an SSD, so maybe 8 to 16 times the risk.)
More research needed.
"The controller of a flash drive must surely know how many pages have failed and been replaced from the pool of spares. So what's going on? Are SSD controllers not being honest with their SMART statistics (for example with SMART 182, "Erase Fail Count")? Or did the testers simply write until failure, without monitoring the statistics to see whether impending failure was easy to spot? Or are there whole-chip failure modes with flash storage that make abrupt failure far more likely than with other VLSI chips such as hard disk controllers? (Well, there are 8 or 16 more VLSI chips in an SSD, so maybe 8 to 16 times the risk.)"
What's happening is that it's the controller that's failing first, rendering everything else moot.
"Still, if I was running a large server estate and was looking at putting SSDs in them, I would probably now think twice before forking out huge amounts of cash on eMLC kit and I would instead be looking at higher-end consumer drives."
I'm one of the people who recently thought long and hard about doing this - the premium for eMLC is just horrible, but in the end we felt we had no choice, since this was for a mission-critical database.
Comparing SSDs rated for 3 DWPD over 5 years to SSDs rated for ~300 total drive writes (which over the same 5 years amounts to just about 0.16 DWPD) does in the end justify the major price difference.
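To make the comparison concrete, here's a quick back-of-the-envelope check of those figures (Python, using only the numbers quoted above):

```python
# Endurance comparison: DWPD (drive writes per day) over a 5-year warranty.
WARRANTY_DAYS = 5 * 365

# eMLC drive rated for 3 full drive writes per day:
emlc_total_drive_writes = 3 * WARRANTY_DAYS

# Consumer drive rated for ~300 total drive writes over its life:
consumer_dwpd = 300 / WARRANTY_DAYS

print(f"eMLC total drive writes over 5y: {emlc_total_drive_writes}")  # 5475
print(f"Consumer DWPD over 5y: {consumer_dwpd:.2f}")                  # 0.16
```

So the eMLC part is rated for roughly 18x the total write volume, which is the gap the price premium has to justify.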
If all of a sudden we can expect much cheaper SSDs to have similar endurance (the results for the 840 Pro add up to about the same 3 DWPD), it's definitely worth looking at again.
Of course, right now it's just anecdotal evidence. Does someone have the budget to burn through 1,000 of them and let me know how it goes?
Decades ago when I was at school I recall daring kids to touch my charged capacitors. Some did which instantly deterred others.
I also remember the charge didn't last very long. Now, SSDs aren't quite the same but the theory of operation's not far off.
SSDs are great devices, I use them all the time, but we're still testing their endurance. A few more years will tell. There's also the question of very long-term storage over decades. The message from similar technologies, EEPROMs etc., is mixed: I've some decades old and perfectly OK whilst a few have carked it inexplicably (but the manufacturing tech is much older, of course).
"SSDs are great devices, I use them all the time, but we're still testing their endurance. A few more years will tell."
In a few more years we will no longer be able to advance SSDs and will be forced to use new technologies. So your solution to the emergence of a technology is to wait a decade or more after everyone else starts using it in mainstream applications, and only adopt it once it has reached its absolute limit of advancement? Do you work for NASA designing probes or something? Do you still store your primary production data on mercury delay lines?
Question: is ADSL an okay technology yet, or are you still just coming to terms with K56flex?
Wasn't this the whole premise behind RAID (excluding RAID 0)? You buy cheap drives, expecting them to fail, but since the data can be rebuilt, it doesn't matter.
Of course, the caveat is that "it doesn't matter, provided you can rebuild the array before enough drives fail to destroy the overall data integrity", which may be significant in some scenarios.
Has anyone done any costings of buying a few high reliability eMLC drives vs buying lots of cheap "desktop class" drives, and building a RAID array with lots of hot spares?
"Has anyone done any costings of buying a few high reliability eMLC drives vs buying lots of cheap "desktop class" drives, and building a RAID array with lots of hot spares?"
I went through the exercise recently, for a fairly small database (a few TB) that gets massive amounts of I/O, mostly writes. With consumer drives I estimated I'd reach the rated write capacity after ~6 months.
In the end, however, a fairly big showstopper was the fact that most drives would reach end of life at the same time. Either I'd simply retire the entire array after 5 months and cycle all drives (more maintenance than I care for), or just wait to hotswap each drive as it failed, and pray that I wouldn't have enough concurrent failures to kill the RAID (which seemed like a real possibility if I evenly wore out all drives to their rated lifespan).
The risk just seemed too big.
They do; as far as I'm aware from the drives I have, most SSDs made in the last two years keep track of SMART attributes for things like the total amount written, the number of reallocated blocks, the amount of "spare area" remaining, read errors, etc. TR went into quite a lot of detail on the SMART monitoring over the course of the test.
Here's the graph they made from the death of one of the drives:
As you can see, the "life left" counter still had plenty of slack left in it, but there was a steep change in reallocated sectors and read error rates shortly before failure. The graphs on the next page show SMART graphs for the surviving drives.
Of course, almost no-one keeps a running graph of SMART stats (in fact the number of people running a SMART monitor of any kind is still very low), but in my own experience SMART is more useful for SSDs than it is for platter-based drives. Now if only the counters were more standardised...
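For anyone who does want a running record, here is a minimal sketch of polling SMART attributes. It assumes smartmontools' `smartctl -A` is installed; the attribute names are illustrative, since IDs and names vary by vendor:

```python
import re
import subprocess

# Row-matching pattern: "  5 Reallocated_Sector_Ct ... <raw value at end>"
SMART_ROW = re.compile(r"\s*(\d+)\s+(\S+)\s+.*\s(\d+)\s*$")

def read_smart_attributes(device="/dev/sda"):
    """Parse `smartctl -A` output into {attribute_name: raw_value}.

    Attribute names are vendor-specific; the ones relevant to SSD wear
    are typically things like Wear_Leveling_Count,
    Reallocated_Sector_Ct and Total_LBAs_Written.
    """
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True).stdout
    attrs = {}
    for line in out.splitlines():
        m = SMART_ROW.match(line)
        if m:
            attrs[m.group(2)] = int(m.group(3))
    return attrs
```

Logging the returned dict to a file once a day is enough to spot the kind of steep change in reallocated sectors that preceded the failures in the test.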
Incidentally, the most interesting aspect of this series was the Intel 335; once it reaches its lifetime write rating it goes into read-only mode, and on the very next power cycle it bricks itself.
HP's latest 3PAR SSDs all come with an unconditional 5 year warranty.
Oct 9, 2014
"The functional impact of the Adaptive Sparing feature is that it increases the flash media Drive Writes per Day metric (DWPD) substantially. For our largest and most cost-effective 1.92TB SSD media, it is increased such that an individual drive can sustain more than 8PB of writes within a 5-year timeframe before it wears out. To achieve 8PB of total writes in five years requires a sustained write rate over 50MB/sec for every second for five years."
("Adaptive Sparing" is a 3PAR feature)
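The "over 50MB/sec" figure in that quote checks out with simple arithmetic; a quick sanity check in Python:

```python
# Sanity-check the quoted figure: 8PB of writes spread evenly over 5 years.
total_writes_bytes = 8e15                 # 8 PB (decimal petabytes)
seconds_in_5_years = 5 * 365 * 24 * 3600

sustained_mb_per_sec = total_writes_bytes / seconds_in_5_years / 1e6
print(f"{sustained_mb_per_sec:.1f} MB/s")  # ~50.7 MB/s, matching "over 50MB/sec"
```

On a 1.92TB drive that works out to well over 2 DWPD sustained for the full five years.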
another post about cMLC in 3PAR:
Nov 10, 2014
"It seem that pretty much all of the drives are very good and last longer than their manufacturers state. It's a fairly unusual state of affairs – something in IT that does better than it states on the can"
Interesting take. From what I had heard, manufacturers' warranties are often set by the failure rate, so statistically they only have to cover the outliers which died early. Then, after the covered period passes, the majority will die in a common range, with another small percentage carrying on far longer than expected. That they are still going over is probably due to the relatively (for manufactured goods) short time SSDs have been on the market, compounded by how often node shrinks come.
About 6 months ago we moved our production database (about 30TB) from disk to SSD. Tests showed that moving to SSD would give us a 30% improvement in I/O performance. We needed that improvement, so the solution we purchased not only uses an array of SSD drives, the entire array is mirrored to a backup array, with each array having a hot spare SSD. The device also uses redundant, hot-swappable controllers, with a third controller as a hot spare. Even the fans are redundant and hot-swappable. Expensive, yes, but speed and availability were much more important than cost. What surprised us was that once we moved to SSD not only are we getting the expected I/O boost, which is reducing the cost of processing each transaction (and we process about 6 million transactions per day, so that cost is a serious consideration), but the system uses much less power and requires less cooling than spinning disks, so through reduced operating expenses we expect the new solution to pay for itself in about 2 years.
So in short it is my experience that, if done correctly, SSD is a much better solution than spinning rust.
>What surprised us was that once we moved to SSD ... but the system uses much less power and requires less cooling than spinning disk.
Wow, that comment tickled my salesman-spouting-BS meter some. Even a rudimentary amount of research would have told you that was true, which, if this really was such a big mission-critical project, would represent a near-total lack of due diligence if anyone was surprised by it.
These SSDs were never powered off, not once. That is the flaw in this testing. Endurance is rated for data retention with the power off. Several of the guys who did the initial testing (that this testing is based on) had SSDs 'die' after one second of power-off once they had reached their warrantied writes. If you removed power from any of those SSDs, period, even halfway through that test, every single bit of data would be gone.
"These SSDs were never powered off, not once. That is the flaw in this testing."
Perhaps it should be noted that since the most common mode of failure is "sudden catastrophic" the main point of failure is not the flash chips but the controller handling them. I guess for the low price point it would be too much to ask to install a backup or replaceable controller unit for the drive.
So noted, in SSDs the controller tends to fail before the actual media. Kind of reminds me of a story of someone looking for a used piano bench and finding out they were hard to come by because pianos tend to outlast the benches, meaning many were scrapped and replaced altogether, reducing the supply.
Interesting, this just prompted me to check the Crucial website for software/firmware updates for my recent SSDs. Turns out there's a firmware update from about a week ago for the MX100:
Nothing critical sounding, but it looks useful for better SMART stats and power/error handling.
Other than the cheap drives, EVERY SINGLE one of the SSDs flagged that it was at end of design life a _long_ time before it actually failed.
This test was based around the question "How long can we run them before they actually fail HARD?"
The SMART data on the drives was returning "Lifetime expired" well before this point (about a year in the case of the 840Pro) so you can't say they hadn't given adequate warning. It's arguable that the SMART data was too conservative.
WRT "Flash has limitations" - well duh - so does magnetic media as others have pointed out. So far in IO-heavy operation the Intel X25E flash drives used as spool cache on our backup server have outlasted 3 sets of rotating media on the same machine (they've written at least 3 times as much as the 840pros got and are reporting 96% left)
Phase change media and memristors and other solid state tech exist. They may or may not eclipse flash long-term (PCM and memristors are _much_ faster than flash), but in the meantime moving back to 40nm and going to 3D stacking has resulted in flash with greater lifespan and lower latency than the 840 Pro range (a 10 year warranty on the 850 Pro is nothing to sneeze at).
You're a fool if you don't make regular backups and you're a fool if you run your drives past the point where they tell you they're "expired" without making provision for impending doom.
The standard against which the write endurance specs are gauged is described on pages 25-27 of this really excellent presentation: http://www.jedec.org/sites/default/files/Alvin_Cox%20%5BCompatibility%20Mode%5D_0.pdf
The write endurance spec (TBW, usually) means you can write that many bytes to the drive and expect to read back its capacity, with (for client drives) an uncorrectable BER of 10^-15, after you left it powered off for a year stored at 30°C. So a week powered off doesn't get you close to the spec.
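To put that 10^-15 UBER in perspective, here is a quick calculation (Python; the 1TB capacity is a hypothetical figure for illustration):

```python
# Expected uncorrectable bit errors when reading back a full drive,
# at the JEDEC client-class spec of 1 error per 1e15 bits read.
capacity_bytes = 1e12               # hypothetical 1 TB drive
bits_read = capacity_bytes * 8
uber = 1e-15                        # uncorrectable bit error rate

expected_errors = bits_read * uber
print(f"{expected_errors:.3f}")     # 0.008: roughly 1 bad bit per 125 full reads
```

In other words, the spec allows on the order of one unreadable bit per hundred-odd full-capacity reads, even after the drive has exhausted its rated writes and sat unpowered for a year.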
Why does powered off matter? The flash controller in the SSD is continually monitoring the ECC of valid data. When it sees a bit error, it corrects it and moves the data. As the device gets near its final death, the flash controller spends all its time correcting ECC and moving data. I suspect the performance of these geriatric devices suffers immensely at the end.