One after-effect of the Western Digital and Seagate embiggening could be a slowdown in technology development across the hard-disk drive industry. Why spend development money when you don't need to? With WD buying Hitachi GST, and Seagate buying Samsung's HDD operation, the two big disk-drive beasts will control around 90 per cent …
The future's solid?
Surely the future is solid state? Maybe the research money needs to go there instead.
All the noise, vibration and heat generated by those discs spinning at 15,000rpm would be eliminated and performance improved. Not to mention that SSDs fail in read-only mode, so you don't lose your data.
SSDs generally fail in read-only mode when the problem is write failures, but that does not mean you can still read your data in every failure situation. In that regard an SSD is no better than an HDD, and potentially worse, depending on how the internal memory is architected.
Right now, while SSDs certainly have a lot going for them, the frequently overlooked issue is that a large SSD of, say, 512GB is still quite small compared to the new 3TB drives, and the 4TB drives shipping early next year. On a cost and density comparison, SSDs do not make a compelling argument for people looking for large storage pools at low cost.
Yes, there are improvements in SSD density coming, but SSDs are still behind the curve in this area as far as HDDs are concerned. However, if the merged disk-drive houses stop innovating then they will surely be doomed. Current disk drives store data at around 500Gbit/in², with the practical limit seen at around 1Tbit/in², though a number of advances are in the works and Toshiba has demonstrated the ability to reach up to 2.5Tbit/in².
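To get a feel for what those areal-density figures mean, here is a rough back-of-the-envelope sketch. The recording-band radii are assumptions for a 3.5in platter, not vendor specs, so treat the outputs as order-of-magnitude estimates only.

```python
import math

# Assumed usable recording band on a 3.5in platter (hypothetical figures).
OUTER_R_IN = 1.7   # assumed outer radius of recording band, inches
INNER_R_IN = 0.6   # assumed inner radius, inches

def platter_capacity_tb(areal_density_tbit_per_in2, surfaces=2):
    """Rough capacity in TB (decimal) of one platter at a given areal density."""
    area = math.pi * (OUTER_R_IN**2 - INNER_R_IN**2)  # usable area per surface, in^2
    tbits = areal_density_tbit_per_in2 * area * surfaces
    return tbits / 8  # Tbit -> TB

# Today's ~500Gbit/in2, the ~1Tbit/in2 ceiling, and Toshiba's 2.5Tbit/in2 demo
for d in (0.5, 1.0, 2.5):
    print(f"{d:>4} Tbit/in2 -> ~{platter_capacity_tb(d):.1f} TB per platter")
```

Under these assumptions, 500Gbit/in² works out to roughly a terabyte per platter, which is about where shipping drives sit, and 2.5Tbit/in² would put a multi-terabyte capacity on a single platter.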
Sadly, pure density improvements generally provide minimal performance improvement, and actually make things a bit worse by creating fat drives that become bottlenecks. Hence the rapidly evolving use of SSDs and HDDs together in tiered solutions.
Hopefully the disk vendors will realize that simply sitting back and milking the cow stops working once the cow dies or the milk dries up, and will continue to innovate and spend the necessary money in their R&D labs.
Then again who knows and maybe we will all be using crystal holographic storage systems in 10 years.
"Switching to 2.5-inch drives just delays the onset of the problem. A 3TB 2.5-inch drive will have the same interminable RAID-rebuild times as a 3.5-inch one."
Well, kind of. Right now the biggest 2.5" HDD is 900GB and the biggest 3.5" HDD is 3TB. If 2.5" HDDs get to 1.5TB in the near future, then that's probably the transition point: you can put two 2.5" HDDs in place of one 3.5" HDD for the same capacity but higher rotational speed (10Krpm vs 7.2Krpm) and significantly faster rebuild times (1.5TB at 10Krpm vs 3TB at 7.2Krpm). And rebuild times aren't helped by hybrid drives, so no luck there.
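That rebuild comparison can be sketched with simple arithmetic: a best-case sequential rebuild takes roughly capacity divided by sustained transfer rate. The sustained-rate figures below are assumptions for illustration, not vendor specs.

```python
def rebuild_hours(capacity_tb, rate_mb_s):
    """Best-case hours to read/write a whole drive sequentially."""
    return capacity_tb * 1e6 / rate_mb_s / 3600  # TB -> MB, seconds -> hours

# Assumed sustained rates; real drives vary and rebuilds rarely run flat out.
drives = {
    '3TB 3.5" @ 7.2Krpm':  (3.0, 130),  # assumed ~130 MB/s sustained
    '1.5TB 2.5" @ 10Krpm': (1.5, 150),  # assumed ~150 MB/s sustained
}
for name, (cap, rate) in drives.items():
    print(f"{name}: ~{rebuild_hours(cap, rate):.1f} h best case")
```

Even with generous assumptions, halving the capacity per spindle does far more for rebuild time than the rpm bump does.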
The I/O density problem is massive, and is helped by hybrid technology, but only if done right. Chances are that this will have to happen in the array rather than the drive; a single drive is (ironically given the article) not large enough to provide a set amount of cache and expect that cache to be used effectively.
So SSDs for the desktop, hybrid storage arrays for businesses.
Cache doesn't help RAID rebuild
Sorry, but a RAID rebuild must actually read the whole disk, so cache is useless during a rebuild.
However, with 2TB drives, you can just use RAID 1/0 and get excellent reliability. "rebuild" is simple enough to not affect operations. Cache brings the IO density of slow drives up to almost the speed of fast drives, so much so that a RAID 1/0 with (say) six 7200RPM disks (6TB usable) should be faster than any same-cost arrangement of 6TB 15KRPM disks.
RAID 1/0 certainly behaves better in real-life applications, but you *still* need to move a whole disk's worth of data to rebuild it. Given that processor speed is not the limitation, it's no faster than rebuilding RAID 5.
"Imagine the truly, gob-smackingly awful RAID-rebuild times of such horrible disk drives."
Yes, a massive 10TB disk spinning at 7200RPM will still take A LONG TIME to rebuild, even sequentially. However, this does not mean that we don't NEED the capacity. If the areal density (Gbit/in²) increases, more data flows under the heads per second at the same 7200rpm, improving the drive's potential sequential read/write speed. We saw this with the jump to PMR. Placing more read/write heads on the arm will further improve read/writes (yes, they plan on doing this, just don't remember the company....). However, assuming a mundane 100MB/s to a 5TB drive, that's still about 13.9hrs to fill the drive.
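The fill-time figure above is just capacity over rate; a quick sketch makes it easy to try other combinations (these are the comment's illustrative numbers, not measured drive specs):

```python
def fill_hours(capacity_tb, rate_mb_s):
    """Hours to write a full drive sequentially at a sustained rate."""
    return capacity_tb * 1e6 / rate_mb_s / 3600  # TB -> MB, seconds -> hours

# The comment's scenario: a 5TB drive at a mundane 100 MB/s
print(f"~{fill_hours(5, 100):.1f} h")

# A hypothetical 10TB drive at the same rate takes twice as long
print(f"~{fill_hours(10, 100):.1f} h")
```

5TB at 100MB/s is 50,000 seconds, a shade under 14 hours, before any RAID parity overhead.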
Then there's SSDs. Flash chip density is only limited by the die shrink and how many chips (and channels) you want to stuff into a standard casing (2.5"/3.5" or PCIe card, etc). More channels = more performance (roughly), assuming the controller and drive interface can handle it. Eventually, it might become cheaper to litho (or the like) our storage space rather than BPM a platter, but the endurance of SSDs only gets worse as the die shrinks, hence the research into making nano-levers and the like for more resilient storage.
Is a spinning platter the way forward? Likely not. Is NAND flash? Most certainly not. There are other technologies in development that are likely to carry us out of our current rust-disk rut and hold us over until the Next Big Thing comes along. Until then, the new 3-platter Seagate 3TB drives will be a welcome product, hopefully causing some 4-5TB drives to show up in the next year or so (not due to tech, as Seagate can simply make a 5-platter drive at any time, but due to the "I want your money" factor).