Wow, a 7% increase
I remember the good old days, when new generations of hard drives brought 50-100% capacity increases. Without some major capacity increases, hard drives will eventually not make economic sense even for nearline storage.
Western Digital has claimed its shingled 15TB Ultrastar DC HC620 is the highest-capacity disk drive in the world. It is a follow-on from the 14TB Hs14 drive, and is available in either 14TB or 15TB capacities. Both the Hs14 and the HC620 are helium-filled drives with, we understand, eight platters.
"Without some major capacity increases, hard drives will eventually not make economic sense even for nearline storage."
"Eventually" being the important word there. Even with major capacity increases, it's pretty much a certainty that hard drives will eventually become obsolete. So will tape. So will our current version of solid-state storage. Exactly when any of that happens will depend on the details of exactly what technological developments happen when, but there can be no doubt that it will happen. Eventually. But aside from navel-gazing futurists, what matters is what's actually available now and likely to be available in the near future. No-one actually cares if hard drives will stop making economic sense in 20 years or 100 years; developments like this are important for deciding what makes economic sense right now.
Data tends to be worth more than the medium it is stored on, so you cannot afford to lose a drive. And now, with 15TB drives, the rebuild time for an 8-drive RAID 5 array is scary, and you'd better hope you have recovered before the next drive fails.
So it will have to be RAID 6 with one hot and one cold standby drive, plus incremental backup on a separate storage server. I have 25 years of data that is important to me, much of it material I have made myself. It is getting increasingly demanding to keep it safe, and the more you look into it - especially the material written by serious users such as national archives - the more complex it appears.
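The "hope you have recovered before the next drive fails" worry can be roughly quantified. A quick Python sketch, using the common datasheet figure of one unrecoverable read error (URE) per 10^15 bits read - an assumed figure, not a spec from the article:

```python
import math

# Chance that an 8-drive RAID 5 of 15 TB disks rebuilds without
# hitting a single unrecoverable read error (URE) on the seven
# surviving drives. The URE rate is an assumed datasheet figure.

TB = 1e12                # decimal terabytes, as drive vendors count them
ure_per_bit = 1e-15      # assumed: 1 URE per 10^15 bits read
surviving_drives = 7     # 8-drive RAID 5, one drive failed
bits_read = surviving_drives * 15 * TB * 8

# math.log1p keeps the arithmetic accurate for such a tiny rate.
p_clean = math.exp(bits_read * math.log1p(-ure_per_bit))
print(f"chance of a URE-free RAID 5 rebuild: {p_clean:.0%}")
```

Under that assumption, a full rebuild has well under a 50% chance of completing without a read error - which is the usual argument for RAID 6 at these capacities.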
> You need a modern raid scheme ala IBM's DRAID among others.
I had a look. From what I could see all disks are active but with spare slices. Perhaps I am missing something here but to me that sounds like all drives are worn equally, while a hot standby is far less worn and would last longer in an intense rebuild. IBM tends to know its stuff so what am I missing?
Why do you care about "wear"? Hard drives don't wear out from writing the way SSDs do, and a hot standby that's sitting around idle and not spinning might fail the moment it starts getting used, since it isn't being properly exercised.
I'd much rather get the benefit of ALL spindles, and have them all in action so I don't find out the hard way my hot standby was a dud when I need it most.
RAID rebuilding may be days or weeks on consumer devices like Drobos but it's not so bad on better equipment. On my last disk shuffle, a Synology box and a Linux box with ZFS each did their rebuilds at about 1 TB/hour while still being in use.
Businesses with huge numbers of disks don't even need to have rebuilding turned on. These SMR drives are most likely write-once archives; in that scenario you can move the data elsewhere and then treat the repaired array as new, empty storage.
Anecdotally I had to clone the knackered 1TB drive of a laptop the other day and that only took just over an hour, and that was from a drive with loads of SMART errors, onto the cheapest 1TB 5400rpm drive we could find on Amazon. I'd hope that a 15TB archive drive would be at least as fast.
>Anecdotally I had to clone the knackered 1TB drive of a laptop the other day and that only took just over an hour, and that was from a drive with loads of SMART errors, onto the cheapest 1TB 5400rpm drive we could find on Amazon. I'd hope that a 15TB archive drive would be at least as fast.
You can rebuild 15 TB onto a 15TB drive at the rate it takes to write 15 TB onto a 15TB drive. However, this requires the source drives to be doing nothing else. The reality in most enterprise systems is that they are still having to do a lot of I/O. This isn't true when you're fixing your laptop though. I had to rebuild a dead 6 TB drive recently. Took me a few days from backups but I wasn't in a rush.
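That rate-limited reality is easy to sketch. Assuming (my figure, not the poster's) roughly 150 MB/s of sustained drive bandwidth, of which only a share is free for the rebuild:

```python
# Rebuild time for a 15 TB drive when only part of the drives'
# bandwidth is available, the rest still serving production I/O.
# The 150 MB/s sustained rate is an assumed figure.

TB = 1e12
capacity_bytes = 15 * TB
sustained_rate = 150e6  # bytes/second, assumed

rebuild_hours = {}
for share in (1.0, 0.5, 0.2):
    rebuild_hours[share] = capacity_bytes / (sustained_rate * share) / 3600
    print(f"{share:4.0%} of bandwidth free -> {rebuild_hours[share]:5.0f} hours")
```

Even with the whole drive to itself the rebuild takes over a day; leave it only 20% of the bandwidth and you are looking at nearly six days of exposure.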
The point of distributed sparing and parity is that all drives take part, so you only need to read from and write to each of them a small amount. For a single failure, the amount you need to read from each drive is inversely proportional to the number of drives in the RAID set. If you have double parity you can distribute each of the two parity blocks independently, so you only have to rebuild the exposed stripes first, and the per-drive load falls off even faster than 1/N: with 50 drives in the RAID set you would need to read/write about 8% of each drive (rebuild time measured in hours, depending on the drive type), but with 500 drives less than 0.08% of each (rebuild measured in minutes, even with large/slow drives).
The more drives you can throw in, the better, basically. If you can spread your data across hundreds of drives, the speed of the individual media becomes less relevant and the speed of transfer between them becomes more important.
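A minimal model of that scaling, assuming stripes of width k spread uniformly over n drives, single parity, and one failed drive. This is a simplification of schemes like IBM's DRAID, and it will not reproduce the exact percentages quoted above:

```python
# Approximate fraction of each surviving drive that must be read to
# rebuild one failed drive in a declustered layout: every lost block
# needs the other (stripe_width - 1) blocks of its stripe, and that
# read load is shared across the (n_drives - 1) survivors.

def rebuild_fraction(n_drives: int, stripe_width: int) -> float:
    return (stripe_width - 1) / (n_drives - 1)

for n in (8, 50, 500):
    print(f"{n:4d} drives: read ~{rebuild_fraction(n, 8):.1%} of each survivor")
```

Even this single-parity model shows the per-drive load falling roughly as 1/n as the pool grows; distributing double parity and rebuilding only the exposed stripes first, as described above, drops it further still.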
And on top of that, being a WD drive, you'll need to be doing those RAID rebuilds 2-3 times more often.
(had sworn off WD drives many years ago, but figured they *must* be better now. NOPE, bought a brand-new 2TB drive, it was dead right out of the box. Back to WorstBuy with it, and on to Amazon to find something usable like Toshiba or Seagate).
I have some 3TB WD greens from 2011 in a zfs array which are still going strong, and they are in use pretty much constantly. Not heavy use, but there is stuff happening all the time. I swapped out a dead Seagate from the array a couple of months ago, that was the first one to die in 4 years.
WHY the heck would I get this when Samsung and others are offering 60 Terabyte SSD Drives (YES! SIXTY TERABYTES!) at ultra-fast, way-beyond-spinning-disk transfer speeds? Next year the 100 TB SSD drives are coming out, AND even today there are companies offering hard-soldered PCBs carrying two PETABYTES of SSD flash chips that can be racked 20 high for 40 PETABYTES in a single 1.8 metre high (72 inch) rack!
Yeah! The drives is currently more than a few quid, but the 60TB or the 40 Petabyte systems is a nice space to have!
>The drives is currently more than a few quid,
You've kind of answered your own question there. Yes, SSDs can be made with huge capacities, but not for anything close to the price. Unfortunately few people have bottomless pockets, and price is an important factor.
When SSDs can be produced for a price/TB close to HDDs then HDDs will stop being relevant. It's still a long way off though.
"WHY the heck would I get this when Samsung and others are offering 60 Terabyte SSD Drives (YES! SIXTY TERABYTES!) on ultra fast way-beyond-spinning-disk transfer speeds."
For the same reason I prevented my previous employer from storing backups on their brand-new FCAL SAN: cost where it doesn't need to be spent ($30/TB CAD at the time vs $1000/TB). These things are cheap, and throwing a bunch of them in a NAS is still cheap and perfectly good for data that does not need to be accessed often.
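That gap compounds quickly with capacity. A trivial sketch using the two price points quoted in the anecdote above (currency and era as stated there):

```python
# Cost of backup capacity at the two price points from the comment:
# ~$30/TB (CAD, at the time) for bulk disk vs ~$1000/TB for the SAN.

disk_per_tb = 30.0
san_per_tb = 1000.0

costs = {}
for tb in (10, 100, 1000):
    costs[tb] = (disk_per_tb * tb, san_per_tb * tb)
    print(f"{tb:5d} TB: disk ${costs[tb][0]:>9,.0f}   SAN ${costs[tb][1]:>11,.0f}")
```

At a petabyte, the 33x per-terabyte difference is the difference between $30,000 and $1,000,000 for the same raw capacity.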
Biting the hand that feeds IT © 1998–2019