The disk drive industry went bananas in 2013, driven hard by cloud storage and flash, both sending the problems of bulk, nearline data storage disks' way. The three big HDD tech issues?
- SMR technology seems a near-term bust except for big cloud users
- Helium drive technology arrived with HGST's 6TB offering
- Seagate announced …
I know this would present a problem with replacement drives, but why don't they increase the physical size of hard drives from, say, 3.5" to 5.25", so that more platters can be installed and we can reach higher TB capacity per drive?
Asking because I'm curious.
Re: 3.5" drives
Good question. Presumably engineering the spinny bits gets harder the wider and heavier the platters are, and the cost of that outstrips the improvement in capacity?
Re: 3.5" drives
That's a bloody good question. Come back, Quantum Bigfoot drives: large platters spinning slightly slower, cooler running, better space utilisation, cheaper per TB. What's not to like?
Re: 3.5" drives
Because currently everyone is being brainwashed into thinking smaller physical size = better.
Smaller is better...
Of course you can increase capacity by increasing the disk diameter. Indeed, until not so long ago 5.25 inch drives were commonly available. Going back rather further, hard drives could be as large as 14 inches.
However, there are serious problems with doing this. Firstly, there is the simple issue that you have to spin larger disks more slowly in order to maintain dimensional stability: larger platters will distort more at high RPM due to the higher peripheral speeds and (therefore) increased forces involved. Go back three decades, and large disks were effectively limited to 3,600rpm, over four times slower than the fastest current enterprise drives, which run at up to 15,000rpm. (Whilst 15K drives are nominally 3.5 inch form factor, in practice the actual platter diameters are smaller to allow this speed to be reached.)

There's also a more subtle reason why larger disks have to be spun slower. Quite simply, even if a 5.25" platter could be spun at 15K rpm, bit density would have to be reduced, as it wouldn't be possible to write the data fast enough: there's insufficient time to polarise the substrate. Reducing the bit density would reduce the capacity, so it would, at least in part, undo much of the value of increasing the form factor.
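To put rough numbers on that write-speed limit, here is a sketch of how rim speed, and hence the required read/write channel rate, grows with platter diameter at a fixed 15K RPM. The linear bit density used is purely illustrative, not any real drive's figure.

```python
import math

def rim_bit_rate(diameter_m, rpm, bits_per_mm):
    """Peripheral speed at the platter's outer edge, and the bit rate
    the channel would have to sustain at a given linear density."""
    rev_per_s = rpm / 60.0
    v = math.pi * diameter_m * rev_per_s   # rim speed, m/s
    bit_rate = v * bits_per_mm * 1000      # bits/s flying under the head
    return v, bit_rate

# Illustrative linear density only (~2 Mbit/inch, i.e. ~79 kbit/mm)
BITS_PER_MM = 79_000
for name, d in [("3.5-inch", 0.095), ("5.25-inch", 0.133)]:
    v, rate = rim_bit_rate(d, 15_000, BITS_PER_MM)
    print(f"{name}: {v:.0f} m/s at the rim, {rate/1e9:.1f} Gbit/s channel rate")
```

At the same RPM and bit density, the 5.25" rim runs roughly 40 per cent faster than the 3.5" one, so either the channel must be ~40 per cent faster or the bit density must drop.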
Large format drives take longer to seek to any given track as there's further to travel, and achieving the required level of accuracy for high track density becomes increasingly more difficult with larger form factors.
Then there is the issue that disk drives are essentially serial storage devices in that they can only read and write one track at a time (albeit with relatively fast access to a given track). It's not possible to put multiple read/write heads into an HDD due to vibration, airflow and other issues (other than old-fashioned fixed head drives, not feasible with modern bit densities). Note this applies even if you increase the form factor in the other dimension by just adding more platters - which also makes the mass to be moved higher.
The upshot of all this is that your multi, multi-TB megadrive is going to be horribly slow due to very high rotational latency and seek times. Even sequential access will be slow in terms of the time it would take to read the entire drive, as you'd be stuck with approximately the same transfer rate as we see currently. In fact tape has an advantage for sequential access, as it is at least possible to add more heads to increase the sequential data rate through parallelism (which is why sequential data rates on modern LTO formats are so much higher than for disk drives - they effectively read/write 16 tracks in parallel).
So the large format drive is dying out for good reason. Already 3.5 inch drives are in slow decline, as access density (IOPS per TB and total device read time) inevitably worsens with increased areal density. (It's inevitable, as capacity increases linearly with areal density, sequential data rates with the square root of areal density, and IOPS are essentially fixed at a given RPM.)
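That parenthetical scaling argument can be sketched numerically. The starting capacity, transfer rate and IOPS figures below are illustrative assumptions, not any particular drive's specs:

```python
# As areal density grows by a factor f (at fixed RPM and seek profile):
#   capacity grows by f, sequential rate by sqrt(f), IOPS stay flat.
def scale(f, cap_tb=4.0, seq_mb_s=150.0, iops=100.0):
    cap = cap_tb * f
    seq = seq_mb_s * f ** 0.5
    full_read_h = cap * 1e6 / seq / 3600   # hours to read the whole drive
    return cap, seq, iops / cap, full_read_h

for f in (1, 2, 4):
    cap, seq, iops_per_tb, hours = scale(f)
    print(f"x{f}: {cap:.0f} TB, {seq:.0f} MB/s, "
          f"{iops_per_tb:.1f} IOPS/TB, {hours:.1f} h full read")
```

Every quadrupling of areal density halves the IOPS per TB twice over and doubles the time to read the whole device, which is exactly the worsening access density described above.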
Re: Smaller is better...
"Going back rather further, hard drives could be as large as 14 inches."
14 inches? You kids of today.
I remember the day.....ZZZzzzzzzzzzzzzz...ZZZZzzzzzzzz.....
when 60 Mb involved about five 24 inch platters in a large plastic dustbin lid type of holder otherwise full of normal honest to goodness machine room air. Hermetically sealed, you say boy? Hermetism was ILLEGAL in those days, I tell you......
Re: 3.5" drives
presumably engineering the spinny bits gets harder the wider and heavier the platters
I think you're on the right track (no pun intended). Larger disks have higher moments of inertia, which is a measure of how hard the disk is to turn (or stop, once it's going). Higher moments of inertia require more powerful motors, mean longer spin-up/spin-down times, and give poorer speed-control response. Also, as you move out towards the edge of the disk, the speed of the platter relative to the head is proportional to the radius (obviously it travels a distance of 2πr every revolution), so another limiting factor will be the speed at which the read/write heads can encode/decode information. If the platter is moving too fast, either the electronics won't be fast enough to keep up, or the arc length needed to write a block (or convenient unit) of data at the speed the heads can manage will be too long to justify increasing the radius indefinitely (IOW, linear bit density will eventually scale in proportion to 1/(2πr) once you reach the limit of the head's en/decoding circuitry).
Thirdly, larger disks are more susceptible to vibrations and wobbles (perhaps due to imperfections in the manufacturing process). The disk heads have to float on a cushion of air (ground effect, I think it is) and you increase the risk of head crashes as you scale up the radius and angular momentum. As the disk is effectively a big gyroscope, it resists changing its pitch if the drive is tilted (dropped), whereas the disk head and mounting doesn't have a similar moment of inertia, so again it's going to cause torsional stresses and more possibility for head crashes if the whole disk pitches or vibrates in the wrong way.
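A back-of-envelope illustration of the moment-of-inertia point above: for a uniform platter of fixed thickness and material, mass grows with r², so I = ½mr² grows with r⁴.

```python
# Uniform disc, fixed thickness and material:
#   m scales with r^2, I = 0.5 * m * r^2, so I scales with r^4.
def inertia_ratio(d_big_mm, d_small_mm):
    """How many times harder the bigger platter is to spin up."""
    return (d_big_mm / d_small_mm) ** 4

print(inertia_ratio(133, 95))   # 5.25" vs 3.5" platter: roughly 3.84x
```

So each 5.25" platter is nearly four times harder to spin up than a 3.5" one, before you even stack extra platters on the spindle.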
Re: 3.5" drives
Because kinetic energy = 0.5 * m * v ** 2; and as the disk gets physically bigger, the outer edges are doing more metres per second for the same number of turns per second. And that speed counts squared.
I think I'd rather have a conventional drive full of Helium than a 'shingled' drive with overlapping tracks...
You can't hermetically seal in helium. Or keep it out either. Anything with a vacuum gradually acquires helium.
So I wonder how long a Helium filled drive lasts?
easy then ... enclose the whole thing in a sheet of Radium metal, and the alpha decay will replenish the helium. Plus it might glow in the dark, what's not to like ...
Even if HAMR and SMR won't play nicely together, surely helium could still be used to cram SMR platters more closely together, giving the benefits of both: 7 platters instead of 4 in the same space, so somewhere around a 10 TB drive with the combination of current technologies?
RAID rebuild times are getting insane, though: because a drive twice the size still takes twice as long to read/write fully, you can get big arrays now which would take multiple days to rebuild fully. During which time, of course, you're at risk of a second drive failing during that rebuild cycle - and requiring a rebuild of its own, if you're using double-parity. Suddenly, that triple-parity stuff doesn't seem so paranoid after all...
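A minimal sketch of the best-case rebuild window, assuming a purely sequential rebuild on an otherwise idle array; the 150 MB/s sustained rate is an assumption, and real rebuilds under load take far longer:

```python
# Floor on rebuild time: the replacement drive must be written in full.
def rebuild_hours(capacity_tb, rate_mb_s):
    return capacity_tb * 1e6 / rate_mb_s / 3600

for tb in (2, 4, 6):
    print(f"{tb} TB at 150 MB/s: {rebuild_hours(tb, 150):.1f} h minimum")
```

Since capacity doubles faster than the sustained rate improves, the window in which a second failure can strike keeps stretching, which is why the extra parity starts to look attractive.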
"Filling a disk drive enclosure with helium, a much lower friction environment than plain old air"
Then why not just use a vacuum?
Then why not just use a vacuum?
The air forms a cushion that lets the head get close, but not too close to the platter. It will dampen a small amount of vibration and avoid the head crashing into the platter. At least, that's as I understand it... I think that I've read somewhere that one of the challenges of using helium is that it's going to leave the head closer to the platter because it's not as viscous as air, so they have to work with lower tolerances.
Re: Then why not just use a vacuum?
Here's a link to a forum thread here on the Reg where the "why helium?" question was discussed already...
why not just use a vacuum?
The head "flies" using I think the Bernoulli effect. It's a very very small ground effect aircraft. You'll find that "aircraft" can't fly in a vacuum. It's more stable than any other method of having the head very close but never actually touching.
It puts the term "Head Crash" into perspective :)
"On-premise data centres will also need bulk, online disk storage, with RAID rebuild time a continuing problem"
I keep seeing this... time isn't the issue. Failed rebuilds are (obviously). The straw man is an additional drive failure while the rebuild is taking place; that's extremely rare. The real problem in a RAID5 rebuild is hitting a bad block while rebuilding: tits-up at that point. To get around this, RAID6 is the answer for most. But now we veer off into the more painful write penalty of RAID6; throw a lot of drives at it and the pain lessens (simplification). MTDL for RAID6 is 110 years (google: intel raid6 paper), which has one scratching one's head when reading about ZFS triple-parity RAID. Still trying to figure out why triple parity. Finally, SMART tech has most drives undergoing pro-active replacement, making RAID5 less of a risk. But RAID5 still scares me; I've seen or heard of too many RAID5 failures on rebuild.
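The bad-block-on-rebuild risk can be estimated from a drive's quoted unrecoverable read error (URE) spec. The sketch below assumes the commonly quoted consumer-class figure of one URE per 10^14 bits read; that's a worst-case datasheet number, not a measured rate, so treat the result as an upper bound:

```python
import math

# Chance of at least one URE while reading all surviving drives in full
# during a RAID5 rebuild, modelling UREs as independent per-bit events.
def p_rebuild_ure(drives_read, tb_each, ure_per_bit=1e-14):
    bits = drives_read * tb_each * 8e12   # total bits that must be read
    return 1 - math.exp(-ure_per_bit * bits)

# Six-drive RAID5 of 4 TB drives: five survivors read end to end.
print(f"{p_rebuild_ure(5, 4):.0%}")   # roughly 80% with these assumptions
```

Even halving the assumed URE rate leaves an uncomfortably large chance of a failed rebuild, which is the case for RAID6's second parity stripe.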
Re: RAID rebuilds
I've seen too many NAS units fail to trust anything on any device.
RAID isn't a backup.
A NAS rsync'ing to another NAS is a backup.
A NAS rsync'ing to another NAS offsite AND you actually get a success/failure email is a better backup.
Re: RAID rebuilds
sub disk distributed raid
I can't believe we're still using spinning disks of metal that have a limited storage lifespan...
Surely there is something better out there for storage that will fit within a 3.5" drive bay?
it's called erasure-coding
Sheesh, nobody sane uses RAID any more. At >2TB, RAID6 is a must, and none of FB/Amazon/Google use RAID. Erasure coding (e.g. a 7-of-15 scheme) pays a footprint penalty of roughly 2x the original data but has humongously better resiliency against loss (you only need any 7 intact pieces to reconstruct the data). Yes, the maths costs CPU, but really, CPUs are stupid fast, so nobody cares about that cost.
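A quick comparison of the footprint/tolerance trade-off; the 8+2 RAID6 layout and the 7-of-15 code (any 7 of 15 pieces suffice) are just illustrative parameter choices:

```python
# k data pieces encoded into n total pieces: store n/k times the data,
# survive the loss of any n-k pieces.
def overhead_and_losses(n, k):
    return n / k, n - k   # (storage multiplier, tolerable lost pieces)

for name, n, k in [("RAID6 (8+2)", 10, 8), ("EC 15/7", 15, 7)]:
    mult, losses = overhead_and_losses(n, k)
    print(f"{name}: {mult:.2f}x footprint, survives {losses} lost pieces")
```

The erasure code spends roughly 2.14x the raw capacity versus RAID6's 1.25x, but tolerates eight lost pieces rather than two, which is the resiliency trade the comment describes.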
It ain't only FB trialing WD's drives in their cheaper and faster than tape cloud storage solution.