If I had a HAMR
I'd be very contented indeed, morning, noon or night.
Seagate plans to demonstrate a coming HAMR disk drive in Tokyo next month. HAMR stands for heat-assisted magnetic recording and is a way of increasing a disk drive's capacity beyond the limits of current perpendicular magnetic recording (PMR) technology. At high areal densities PMR technology breaks down because each bit …
Surely if you're putting all that technology into a drive you want it to be as large as possible. I can't understand why the Quantum Bigfoot drives died out.
Two things spring to mind.
Space in datacentres, for one... power usage for another (spinning up a small disk needs a smaller motor, after all).
A big 2.5" is 1TB. A big 3.5" is 4TB. To compare space and power, compare four 2.5"s to one 3.5". I think the reason HAMR is starting at 2.5" is that they are starting with a small, high-margin market, because there will not be much production capacity to start with and the cost of a recall will not be a disaster. I would much rather send three apologies and three trucks of new drives to Google, Facebook and Amazon than send 100,000 disks to 50,000 different commentards.
I am sure a big slow 3.5" will arrive when manufacturing has scaled up. In the mean time, shingle is a more obvious choice than HAMR if high speed is not required.
Latency is a big factor. A 2.5in 10K drive has a lower access latency than a 15k 3.5in drive. Plus you get more drives in the same amount of space. Data centers can be huge and that small difference in space can add up quickly.
The 2.5in 20k disks are quickly becoming the standard for high performance magnetic storage. The cost is way lower than enterprise grade SSD and the performance is still quite good for many applications. Throw some SSD caching in front and you can get close to pure SSD performance with a much lower cost per GB.
Also, having more/smaller drives makes it easier to replace drives as needed. And it takes a lot less time to get a multi-disk device back online after a drive goes bad when each disk is smaller. How long would it take to rebuild a RAID consisting of terabytes versus one consisting of gigabytes? Smaller = less downtime for any given multi-disk device, and when a device is down it ain't earning any money for its masters.
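The rebuild-time point above is easy to put numbers on. A minimal back-of-envelope sketch, assuming (purely for illustration) that the array can sustain a flat 100 MB/s rebuild rate:

```python
# Back-of-envelope RAID rebuild time: roughly capacity / sustained rebuild rate.
# The 100 MB/s rate is an assumption for illustration, not a vendor spec, and
# real rebuilds are often slower because the array keeps serving live I/O.

def rebuild_hours(capacity_gb, rebuild_mb_per_s=100):
    """Optimistic rebuild time in hours at the assumed sustained rate."""
    return capacity_gb * 1024 / rebuild_mb_per_s / 3600

# A 4 TB drive versus a 500 GB drive at the same assumed rate:
print(round(rebuild_hours(4096), 1))  # ~11.7 hours
print(round(rebuild_hours(500), 1))   # ~1.4 hours
```

Same rate, eight times the capacity, eight times the window of degraded-mode exposure.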
Did they finally launch and I missed them? They certainly aren't becoming a standard if I've never heard of them and Google can't find any evidence of actual products.
BTW, there is no such thing as a 3.5" 15k rpm drive. They are in 3.5" form factor, but the actual platters are only 2.5", presumably because spinning a 3.5" platter that fast caused problems (probably the same problems that stopped them from going to 20k rpm)
Even if 20k rpm drives do exist now, I fail to see why you'd want them as a storage tier. It is the same problem 15k rpm drives have fitting in. Those offer maybe 5-6x more IOPS per GB, with only 2-3x better IOPS per spindle and essentially no IOPS/$ advantage, so you could get a lot of the IOPS/GB benefit by simply short stroking a 7200 rpm drive. SSDs offer 2-3 orders of magnitude more IOPS per GB. Having two tiers as close as 7200 rpm and 15k rpm doesn't make sense with the third almost equally far away from both.
You get your best bang for the buck using a 7200 rpm bulk tier and an SSD performance tier, at least that's what all my testing (conducted a couple years ago on a Vmax using multiple workloads) clearly demonstrated.
"I can't understand why the Quantum Bigfoot drives died out"
They were fragile as hell, slow and not particularly reliable even when tended lovingly. I have a lot of other drives left kicking around from that era but no Bigfoots ever made it to the 5 year mark.
"In the mean time, shingle is a more obvious choice than HAMR if high speed is not required"
If you think disks are slow to write now, wait until you have to overwrite sectors in a shingled setup.
I won't touch them for home or work use, even with someone else's bargepole. There's just too much to go wrong when you're partially overwriting adjacent tracks.
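The overwrite penalty being complained about here comes from shingled bands: because tracks partially overlap, an in-place update forces a read-modify-write of everything after it in the band. A toy model (band size is an assumption; real SMR bands are typically tens to hundreds of MB):

```python
# Toy model of shingled (SMR) write amplification. Overwriting any block in a
# shingled band forces the drive to read and rewrite the whole band, because
# later tracks partially overlap earlier ones. BAND_BLOCKS is an assumed size.

BAND_BLOCKS = 256

def blocks_physically_written(updated_blocks):
    """Blocks actually rewritten for an in-band random update: the full band."""
    return BAND_BLOCKS

def write_amplification(updated_blocks):
    """Ratio of physical writes to logical writes for a small update."""
    return BAND_BLOCKS / updated_blocks

print(blocks_physically_written(1))  # 256 blocks rewritten to change one block
print(write_amplification(1))        # 256x amplification for a 1-block update
```

This is why shingled drives suit archive and append-mostly workloads far better than random-write ones.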
"Those offer maybe 5-6x more IOPS per GB"
The _ONLY_ advantage that spinning faster gets you is faster sequential reads. Anything involving random head seeks is only fractionally faster than previously, but you lose a fraction more time waiting for things to settle.
That's why Seagate and others are looking at hybrid storage with flash write caching.
Wrong. Spinning faster reduces the latency for the desired data to get under the read head. In the worst case, if you complete your seek just after your data passed under the read head and have to wait for an entire rotation, you must wait 8.3 ms (4.15ms on average) on a 7200 rpm drive. On a 15k rpm drive those numbers are halved.
More importantly however, faster mechanicals are used in the high end 15k rpm drives so the seek time itself is nearly half what it is for a typical 7200 rpm drive. Nothing stops you from making a 7200 rpm drive with similar seek times, but no one does because the market isn't there.
The net effect between the faster seek, reduced rotational latency, and the smaller capacities of 15k rpm drives is that you typically see 5-6x more IOPS per GB on a 15k rpm drive than you do on a 7200 rpm drive. It has nothing to do with the faster throughput on the 15k drives.
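The latency arithmetic in the reply above can be sketched directly. A rough random-IOPS model from those numbers, using assumed average seek times (8.5 ms for a typical 7200 rpm drive, 3.5 ms for a high-end 15k drive) rather than any specific product's spec:

```python
# Rough random-IOPS model: one random I/O costs an average seek plus, on
# average, half a rotation. Seek times below are illustrative assumptions.

def random_iops(rpm, avg_seek_ms):
    half_rotation_ms = 0.5 * 60000 / rpm   # average rotational latency
    return 1000 / (avg_seek_ms + half_rotation_ms)

print(round(random_iops(7200, 8.5)))   # ~79 IOPS, typical 7200 rpm drive
print(round(random_iops(15000, 3.5)))  # ~182 IOPS, high-end 15k drive
```

That gives the 2-3x per-spindle gap mentioned earlier; the 5-6x per-GB figure then follows from the 15k drives' smaller capacities.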
I meant 10k, it was just a typo. I agree that SSD with 7.2k drives probably yields the best IOPs/$.
Seagate had me cheering right up to the point where they said they are developing a new fast drive for enterprise blade servers. Despite their bluster, that market is heading to flash fast, simply because of raw performance.
A new 10K RPM drive aimed at being a market segment leader doesn't make any sense. It's like GM announcing today that they've just developed the world's biggest steam-driven truck!
Well... not exactly. SSDs have entered the enterprise like a storm in the last few years. All the big boys are using them and have fast SANs: HDS, EMC, HP, IBM etc.
You must consider the mix and use of different disks in SANs and high-capacity systems. Many boxes still have lots of drives that are going to last for years to come: backups and archived data from Oracle ERPs or SQL tables get offloaded to 10k low-cost drives. You use 15k drives for network shares and databases.
Many are finding hybrid drives offer low cost with a large high-speed cache, pushing dynamic data close to SSD speed with a fine-tuned cache and effective load balancing.
There is a flaw with NAND drives. In large corps that are making millions of reads and writes a second, there is degradation of the memory. A normal user probably won't see it over years, but these chips are just being slammed 24/7 with data, and they wear out and make errors. There is a lot of work going on in that field.
Now, a few years ago when we started putting SSDs in our products, the contract changed to designate SSDs as a commodity part, unlike when we sold professional high-speed SCSI drives with service and repair/replace in a few hours worldwide. A customer will pay a premium price for that stability and availability.
If the SSD starts wearing down and making errors, as much as those dang things cost, all you can do is pitch them in the trash. A commodity...
A steam-driven truck could actually be a huge success. ;)
To all you COD nerds out there: the HAMR with a thermal sight and smoke grenades is remarkably effective on Nuketown.