RAID 1
I ain't even mad.
Hear that siren? It's an ambulance chaser. Seagate is facing a class-action lawsuit from lawyers representing one pissed-off Barracuda disk drive user – and they want more people to join in. Acting on behalf of one Christopher A. Nelson of South Dakota, the lawyers say the drives failed more often than they should have been …
Correct. During the rebuild process, all the data on the other drives in the RAID is read from end to end, and with these drives you're very likely to trigger yet another failure. Now you have at least two failed drives. In RAID 5, you're toast. In RAID 6, there's still a chance to keep going and rebuild the most recently replaced drive. I've had RAID 6 systems with 8 of these drives and lost the RAID to this domino effect. Apparently the 4 TB versions do not display this behavior.
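The arithmetic behind that domino effect is easy to sketch. As a toy model (the function and the numbers are illustrative assumptions, not measured rates), assume each surviving drive independently has some elevated chance of dying during the long, full-stroke read stress of a rebuild; independence is actually optimistic here, since drives from the same bad batch fail together:

```python
def p_second_failure(n_remaining: int, p_per_drive: float) -> float:
    """Chance that at least one of n_remaining drives dies during the
    rebuild window, assuming independent failures -- optimistic for
    drives from the same bad batch, whose failures are correlated."""
    return 1.0 - (1.0 - p_per_drive) ** n_remaining
```

An 8-drive array rebuilding one dead drive has 7 survivors; even at only a 10% per-drive chance of failing under rebuild stress, `p_second_failure(7, 0.1)` comes out better than even, at roughly 52%. That is how a RAID 6 full of drives from one bad model gets eaten.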
Really, I'm almost always ready for some lawyer bashing.
But when it comes to those crooks at Seagate, it's high time and a joyous event.
Remember when they sold those 2TB "power of one" Barracuda drives that were supposed to have 1TB per platter, but gave an identical model number to another drive with 3x 666GB platters from older Chinese factories, so it was impossible for the average customer to tell whether they got the fast, reliable drive or the slow, unreliable one?
There wasn't even a way to determine ahead of time which drive you'd get from your distributor - you'd have to take delivery first and then complain if you got the wrong ones.
Imagine you plug just 1 of the 30% slower 3-platter drives into your raid: the whole thing will suddenly run slow.
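That one-slow-drive effect follows from how striping works: every stripe needs a chunk from every member, so the array waits on the slowest disk. A toy model (function name and MB/s figures are my own illustrative assumptions, not measured Barracuda numbers):

```python
def raid0_read_mbps(drive_mbps: list[float]) -> float:
    """Toy model of striped-array sequential throughput: each stripe
    needs a chunk from every member, so the array runs at N times the
    SLOWEST drive's rate, not the sum of all the drives."""
    return len(drive_mbps) * min(drive_mbps)
```

Four matched fast drives at 180 MB/s give `raid0_read_mbps([180, 180, 180, 180])` = 720; swap in a single 30%-slower 3-platter drive at 125 MB/s and the whole array drops to 500, even though three of the four disks are still fast.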
I quit buying Seagate drives, as I don't do business with crooks.
It will be very difficult to prove *unless* an awful lot of other people report similar issues, so that statistically significant high failure rates can be inferred.
Plenty of scope for (often unknowing) user abuse of externals, e.g. total muppetry such as moving them whilst in use, or leaving them somewhere that gets very hot (plenty of spots near windows in my house that, even in the dismal UK, get noticeably hot on a sunny day, and where you would definitely not want to leave electronic kit).
Although the drive unit itself is sealed, if the external has any vents to allow airflow for cooling, basic housekeeping helps: make sure they're not choked up with dust.
And as ever, the math comes back to bite you: mean-time-to-failure stats (if available) are all well and good, but a device can still fail early, so you need overall failure stats, not those of one person.
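The MTBF point is worth making concrete. Under the constant-failure-rate (exponential) assumption the spec sheets themselves use, a quoted MTBF converts to an annualized failure rate like this (the function name is mine; 8760 is hours per year):

```python
import math

def annualized_failure_rate(mtbf_hours: float) -> float:
    """Convert a quoted MTBF into an annualized failure rate (AFR),
    assuming a constant (exponential) failure rate -- the same
    optimistic assumption the spec sheets make."""
    return 1.0 - math.exp(-8760.0 / mtbf_hours)
```

A "1 million hour MTBF" drive still has roughly a 0.87% AFR, so a fleet of 1000 of them should see around 9 failures in the first year, and any single unlucky buyer can be one of them. Which is exactly why population-scale stats, not one person's experience, are what count.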
an awful lot of other people with similar issues so that statistically significant high failure rates
Or one company with an awful lot of drives
https://www.backblaze.com/blog/wp-content/uploads/2015/07/blog-fail-drives-manufacture-2015-june.jpg
https://www.backblaze.com/hard-drive-test-data.html
https://www.backblaze.com/blog/hard-drive-reliability-stats-for-q2-2015/
https://www.backblaze.com/blog/hard-drive-reliability-q3-2015/
So, Mr Nelson made a 'backup' and then deleted the files from his main drive?
Obviously that is what he did, or he'd still have a perfectly good copy of the lost files on his main drive. In my book that makes him 100% responsible for the data loss: deleting the originals turned his backup into the prime copy, but he never backed that now-prime copy up onto another disk and stored it offline, somewhere protected from fire or other damage.
No data can be considered safe unless at all times there is a copy that cannot be damaged by mains spikes, fire, etc. This means that you have a minimum of two backup copies so that at least one copy is guaranteed to be safely stored offline at all times. Paranoid, moi? Yes, and a firm believer in Murphy when it comes to protecting data you value.
So, Mr Nelson made a 'backup' and then deleted the files from his main drive?
You see that sort of stupidity so many times on NAS box forums, and you can't convince them that once they delete the original files they no longer have a backup: all they've done is move the files from one place to another.
The other thing you see is people who think that because the NAS box has twin redundant disk drives, there are two backups.
They really should make small NAS boxes with software that activates a red light and an irritatingly loud beeper when one of the drives fails. Because many people think they'll be magically informed when a drive fails, even if they've never told either the NAS box or the warranty-registration form their e-mail address, mobile number, etc.
Not that some of the cheaper ones would generate an alert even if you had.
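On Linux at least, you can build that alarm yourself from /proc/mdstat, where the md driver marks a faulted member with "(F)". A minimal sketch (the function name and the sample text are mine; in practice `mdadm --monitor` with its mail option does this job properly, but only if someone actually sets it up):

```python
def failed_arrays(mdstat: str) -> list[str]:
    """Return names of md arrays with at least one faulted member,
    parsed from /proc/mdstat-style text ('(F)' marks a faulted drive)."""
    bad, current = [], None
    for line in mdstat.splitlines():
        if " : " in line:                      # e.g. "md0 : active raid1 ..."
            current = line.split(" : ", 1)[0].strip()
        if current and "(F)" in line and current not in bad:
            bad.append(current)
    return bad

# Hypothetical /proc/mdstat snapshot: md0 has a faulted drive, md1 is healthy.
sample = """\
md0 : active raid1 sdb1[1](F) sda1[0]
      976630336 blocks super 1.2 [2/1] [U_]
md1 : active raid1 sdd1[1] sdc1[0]
      976630336 blocks super 1.2 [2/2] [UU]
"""
```

Run something like this from cron and drive a beeper or an e-mail whenever the list is non-empty. The comment's point stands either way: nothing alerts you unless you wire it up.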
AFAIK every disk drive vendor has shipped some models that are lemons in its time. It is inevitable. Every drive in production is in effect a prototype. By the time any particular model of drive has a five-year track record of acceptable reliability, it is obsolescent and no longer manufactured. The manufacturers do do accelerated ageing tests, but they cannot catch all failure modes or guard against batches of faulty components from their suppliers.
If you are populating a RAID with mirrored pairs, the absolute worst thing you can do is buy two identical drives. One Seagate and one WD is a better bet than two WD even if you can prove that the WD has five times the MTBF of the Seagate (which you can't, see above). That is because if one drive fails because of a batch of faulty components, the others from the same batch won't be far behind. The one from a different manufacturer is least likely to contain components from the same faulty batch.
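The mixed-vendor argument can be put in toy-model terms. Everything here is a made-up illustration (the probabilities and the `correlation_boost` knob are assumptions, not measured values), but it shows why correlation, not raw MTBF, is what kills mirrors:

```python
def mirror_loss_prob(p_fail: float, correlation_boost: float) -> float:
    """Toy model of losing BOTH halves of a mirrored pair.
    The first drive fails with probability p_fail; the second drive's
    conditional failure probability is multiplied by correlation_boost
    when both drives share a batch (boost > 1), or left independent
    (boost = 1) when they come from different manufacturers."""
    return p_fail * min(1.0, p_fail * correlation_boost)
```

With a 5% per-drive failure probability, two independent drives from different vendors lose the pair with probability 0.05 x 0.05 = 0.0025, while two drives from one faulty batch with a 10x correlation boost lose it with probability 0.05 x 0.5 = 0.025 -- ten times worse, regardless of which vendor's MTBF looks better on paper.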
Airlines know better than to service both engines on a twinjet at the same time. (It's also forbidden by air safety rules, for the same very good reason).
Backups issues aside, the 3TB Seagate Barracuda drives are a real problem. We use thousands of large consumer drives for cheap storage (using RAID60 with duplicated copies across the globe.)
We've used Hitachis, Western Digitals, Toshibas and Seagates. The systems with the 3TB Seagates had a 25% failure rate after one year of service... and it wasn't until then that we realized Seagate had dropped the warranty from 3 years to 1. We thought maybe it was just a bad batch, but even the replacements started failing as they approached a year. Not only would these drives fail, but most of the time they would fail in spectacular ways... preventing a system from booting by confusing the RAID backplane and/or SAS/RAID card.
It became such a problem that we ended up buying 400+ new drives to replace every Seagate 3TB we had. Finding that Backblaze article above was one of the reasons we went ahead and ate the cost. We ended up losing several systems to those Seagates, as RAID60 simply wasn't enough redundancy.
Our loose running theory is that the 3TB came out right around the time the floods ruined hard drive factories overseas, and Seagate cut corners to stay in the market.
Now... I think this guy probably should have been more careful, but his claim is accurate from my perspective.
There are various sources showing that the ST3000DM001 3TB drives have a higher than normal failure rate.
There was a Russian (I think) article that seemed to show they suffer from a bad seal that lets dust into the clean zone within the drive and effectively destroys it. Annoyingly, I read a (poorly) translated version of it after following a link and can't remember where it was.