Flash fails more than disk. Mac Observer cites a French website showing hard disk drive failure rates (1.94 per cent) were slightly better than solid state drive failure rates (2.05 per cent). Hardware.fr obtained its component failure rate statistics from an unidentified retailer's repairs and returns database. The hard disk …
Count 'em, six
I just ain't living right, 'cause I got six of those damn Seagate frakk-ups. Oh frelling well, now I've got WD and Samsung, and some other deserving soul is getting the trash. So far so good. Maybe the platter pushers should work on reliability and not size. Any good hooker on a Saturday night will tell you staying power is more desirable. And after all, these ladies do know a little about whoring and taking people's money.
Sex workers' preferences...
...would surely be short and fast, not duration. Get it over and move to the next customer. They are not in it for pleasure. Or do you think their screams and moans are uniquely authentic to you?
Let's get a sense of proportion here.
These are domestic devices, not Enterprise, Military or even Space grade products. Typically we can expect a failure rate of 10%.
2% possible failure with a 3 or 4 year warranty is a miracle for the price we pay.
Are there any real IT professionals here?
I mainly find that the problems arise when updating the firmware to get better performance. Last firmware updates on my drives bricked them. Though the RMA process was all good.
Err... What is defined as a failure?
A brick/doorstop-style failure? An uncorrectable read error on an otherwise relocatable (by SMART) sector, with the user returning the drive because they don't know how to fix it? Or what?
IMO even the 1TB WD Caviar Green EADS, which is one of the better-performing 1TB drives, has waaaay higher sector defect rates than many drives of old.
Out of the two I have in my home server, one has already produced 26 sector relocations within 1.5 years. For comparison, the 2 x 250GB Maxtors I had before that had 0 relocations in nearly 3.5 years, and are still trucking along in their 5th year and counting.
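For anyone who wants to check their own reallocation counts, here's a minimal sketch that pulls the Reallocated_Sector_Ct raw value out of `smartctl -A` text output. The sample line below is made up, but follows smartmontools' usual ten-column attribute-table layout:

```python
def reallocated_sectors(smartctl_output: str) -> int:
    """Return the raw Reallocated_Sector_Ct value from `smartctl -A` text,
    or -1 if the attribute is not present."""
    for line in smartctl_output.splitlines():
        fields = line.split()
        # SMART attribute rows have 10 columns; the raw value is the last one
        if len(fields) >= 10 and fields[1] == "Reallocated_Sector_Ct":
            return int(fields[9])
    return -1

# made-up sample row in the usual smartctl column order
sample = "  5 Reallocated_Sector_Ct 0x0033 100 100 036 Pre-fail Always - 26"
print(reallocated_sectors(sample))  # 26
```

A nonzero (and especially a growing) raw value is the thing to watch, as the poster above found.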
I'd be interested to see laptop failure rates compared, to find out whether the AFR of a laptop HDD -in day-to-day laptop use- was worse than that of a laptop with an SSD. When I worked in laptops we'd see a failure rate of about 8%/year, just because the disks get treated so badly: thrown about, dropped, kept in bags while still spinning (and then overheating), etc, etc. SSD storage in laptops can only be a good thing.
Real world experience
This is a link to a whitepaper outlining Intel's experience based on 36,000 SSDs deployed to date. Big reduction in AFR compared to HDD.
I was looking forward to an article detailing how Adobe Flash causes drives to fail.
All in the name
Maybe just calling them Flash makes them as flaky as Adobe Crash player?
I read the headline and got the lyrics to "No shit Sherlock" stuck in my head.
Just because your Maxtor hasn't reported any reallocations, it doesn't mean there aren't any defects on the drive! I bet offline scan is not enabled.
Flash is not that reliable
Since when has it not been a given that NAND Flash is inherently unreliable?
That's the trade-off between NAND vs. NOR Flash - High density, low cost, low reliability vs low density, high cost, high reliability.
ECC and wear-leveling can only go so far and it'll probably be a long time before a NAND flash chip can take the same hammering as a hard-drive. Who knows, maybe memristors will take off before then.
Apples and Oranges
Without knowing the size of the SSDs this is meaningless. If I've got 2TB of data to store, then for a meaningful comparison I need to know the chance of a failure occurring somewhere in the big pile of SSDs I'd need to store that data. Without knowing the SSD capacity I don't know how big that pile is...
First off, these numbers likely reflect drives sold more than a year ago, when common drives were under 64GB. Since it's a retailer, I would also assume they don't peddle the high-end drives as much (at all?). With a small number of GB to play with, the drives are likely being used as system drives, with a swap file (as default) on the C:\ drive as well. After a year (or more) of use, it wouldn't surprise me to see some drives start to buckle. I wouldn't put it past these home users to still be running WinXP too, and thus not getting the Win7 benefit of sector alignment, etc. - not to mention the imaging programs that are sometimes shipped with the drives (most disk imaging programs don't take alignment into account *cough* Acronis *cough*).
Without full info on model numbers (at the least), this data is, as stated, a peek, but far from useful. The other question would be: are the numbers shown the % vs all drives sold, or the % vs drives-of-type sold? One would assume the return-failure rate would be % vs drives-of-type, but one can't be caught a fool because of an assumption.
Retailer vs. Manufacturer
Since the numbers are from a retailer, failures would probably be limited by their return policy (15-45 days). After that, defective hardware would pass through the manufacturers' RMA process. Therefore I doubt swap file configuration would factor into these numbers.
That said, this article offers no insights that cannot be found from larger retailers that offer purchase rating feedback like Amazon and NewEgg.com.
Apples and Oranges v.2
What I would have liked to see here is some comparison on price - I'm guessing that a lot of circa 100GB disc drives fail, but are simply binned, whereas SSDs of any size tend to be more expensive, so even the small ones will get returned when they fail.
I run 50+ Intel SSD machines here and have only ever had one "failure", which was simply SMART errors (all the data was fine). I rang up Intel and it was pretty hard to RMA, as they didn't believe there was a problem!! That's either a really reliable drive or really bad support.
I've RMA'd more than 50 Intel SSDs for full and total failures. Granted I'm running slightly (a whole hell of a lot) more of them than you are.
MLC NAND is pretty crap at storing data, and it's only going to get worse as we shrink process size. Write longevity is absolutely not an issue in current 34nm chips, but you're looking at a 10x decrease in program/write/erase cycles when everything moves to 22-25nm, which is really going to make longevity an issue, and it's not going to help other reliability issues at all either.
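A rough back-of-envelope shows what that 10x cycle drop means. All the figures here are assumptions (capacity, daily host writes, write amplification), not specs from the article:

```python
def lifetime_years(capacity_gb: float, pe_cycles: int,
                   gb_written_per_day: float, write_amp: float = 2.0) -> float:
    """Crude wear-out estimate: total NAND endurance divided by daily writes.
    write_amp models controller overhead (extra NAND writes per host write)."""
    endurance_gb = capacity_gb * pe_cycles / write_amp
    return endurance_gb / gb_written_per_day / 365

# hypothetical 160 GB drive, 20 GB/day of host writes
old = lifetime_years(160, 5000, 20)   # ~34 nm class MLC
new = lifetime_years(160, 500, 20)    # 10x fewer cycles at smaller nodes
print(round(old, 1), round(new, 1))   # 54.8 5.5
```

Under these assumptions the 34nm drive outlives any realistic service life, while the smaller-node one lands uncomfortably close to a normal warranty period, which is the poster's point.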
I've had loads of problems with Western Digital drives. One was a HD crash and since then I learned to check SMART status regularly.
I caught another WD drive when it failed SMART Status (WD exchanged it for a new drive).
I also had a Seagate drive fail SMART.
I've never, ever had a problem with a Hitachi HD, ever. And I have some Hitachi drives in constant use for over 7 years.
"Hardware.fr obtained its component failure rate statistics from an unidentified retailer's repairs and returns database. "
No doubt from the hard drive of a customer's computer after they received the machine back from repairs, eh?
Please donate a beer
From that graph it seems to me that SSD is more reliable than HDD, except OCZ (the one I have, darn). Who is still buying 1TB anyway? 1.5 TB is the absolute minimum these days :) Unless we talk 10,000 rpm or... SSD?
I'd also like to know...
...if it was SLC or MLC - I have never thought the latter was a good idea for reliability. Cheap, nasty, consumer-grade tech. Ick!
From my experience with WD, and a couple of old Maxtor external drives I've been using for a while: the Raptors and the 1TB Caviar Blacks I have have run a combined total of 12 years so far without a single error. The Maxtors (one of which I have dropped repeatedly, while accessing data, from a height of about 4 ft) have had no problems whatsoever, which surprised the hell out of me for that one drive. I miss Maxtor. :(
I don't understand this nonsense about expressing failure rates as percentages.
Any device will fail if I keep it and use it for long enough.
Failures are usually expressed as MTBF, mean time between failures.
If they're producing stats on failure rates then there has to be a mention of time somewhere and they haven't mentioned it at all.
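For what it's worth, the two views are convertible. Under the usual constant-failure-rate (exponential) assumption, a spec-sheet MTBF maps straight to an annualized failure rate; the 600,000-hour figure below is an assumed, typical consumer-drive number, not one from the article:

```python
import math

def afr_from_mtbf(mtbf_hours: float) -> float:
    """Annualized failure rate under a constant-hazard (exponential) model."""
    hours_per_year = 8766  # 365.25 days
    return 1 - math.exp(-hours_per_year / mtbf_hours)

# assumed consumer-drive spec-sheet MTBF
print(f"{afr_from_mtbf(600_000):.2%}")  # about 1.5%
```

So a ~2% annual return rate is roughly consistent with MTBFs in the hundreds of thousands of hours - but only if the time window is stated, which is exactly the poster's complaint.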
If you want higher densities in the same physical form factor, then failure rates have to increase.
And as time goes by, as the feature size of the integrated circuit technology reduces, failure rates will increase.
I'm not convinced everything needs to be as small (and as unreliable) as possible.
I look at electronics from the 1980s, using, say, 3-micron geometries. I've seen chips from before then lasting 25 years and still going strong. I don't buy this attitude that motherboards don't need to last long because people will replace them within 3 years.
I'm guessing quite a few people don't bother returning a dead drive, either because of the hassle or because their data is on it. They just swear a bit and get a replacement. Returns would also only be within the guarantee period. I've had drives work for ten years quite happily. If they suddenly start dying a month after the guarantee period is up, that would be a serious issue that wouldn't show up in data like this.
Given that this is tech, you'd expect a more scientific approach to failure data. It would be more useful if it came from a large organisation that logged all its drive failures.
How long is this test over?
An important piece of information I don't see here is how long this test ran. 1 year? 6 months? Straight out of the box? Kind of important to know - and this data should be updated over time, to see if the failure rate changes with age. Because without a timeframe, one could make the claim that 100% of SSDs AND hard drives fail... given enough time.
I'd be interested to see what Apple (née Computer) Inc. does. They include SSDs in some of their more high-profile offerings. If SSDs fail more often than HDDs (and often enough to affect profit margins), I imagine that they'd switch to HDDs instead. Customer dissatisfaction is not good with high-cost, emotionally-charged products.
We'll have to wait and see.
You mean Hard Disks fail?
Ye Gods - why did no-one tell me?
Now I might actually have to use that backup software I've been backing up for the past 10 years.
Mind you, the only time I needed it, Acronis informed me it simply could not perform the backup as the indexes were incorrect - so, bloody useless software.
Most common return reason
...is that the user tries to copy very large files (over 4GB) to their brand-new drive, and the poor old FAT32 volume coughs and splutters before giving up. The customer returns the drive as broken...
It just needed an NTFS reformat.
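The FAT32 ceiling behind those returns is a hard per-file limit of 2^32 - 1 bytes (one byte under 4 GiB): a file that size or larger cannot be written at all, regardless of free space. A trivial check:

```python
FAT32_MAX_FILE = 2**32 - 1  # largest file FAT32 can hold: 4 GiB minus one byte

def fits_on_fat32(size_bytes: int) -> bool:
    """True if a single file of this size can exist on a FAT32 volume."""
    return size_bytes <= FAT32_MAX_FILE

print(fits_on_fat32(5 * 2**30))  # False: a 5 GiB file won't fit
```

Which is why a DVD image or large video "breaks" a perfectly healthy drive, and why the NTFS reformat fixes it.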