How many terabytes can you fit on a 2.5-inch hard drive?

Can we expect 2.5TB 2.5-inch hard drives and 5TB 3.5-inch drives by 2012? It seems realistic if the claims of hard disk drive toolmaker MII, Hitachi GST, and others are realised. To reach these levels, platter areal density needs to increase and read/write head capabilities also need to improve. Current areal density mass …

COMMENTS

This topic is closed for new posts.

2.5TiB on a 2.5" disc? Pah!

Interesting article. I hate to add to the SSD hype, but I did a very rough yet conservative back-of-MS-Calc calculation of what a 2.5" SSD could possibly hold using existing technology.

I have an 8GiB MicroSD card for my phone. These things are tiny - about 1cm^2 by 1mm thick, even with the casing and contacts. Astonishing. Scaling that volume up to a 2.5" disc, I came to a ballpark figure of 12TiB - huge for such a modest volume and a fair bit more than I can see a future HDD fitting! There are, of course, many reasons why you can't simply fill a disc with MicroSD cards, but it would appear to me that, as far as densities and capacities go, the future is likely with the SSD?
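The scaling above can be sketched in a few lines (the dimensions are rough assumptions, not measured values; the final TiB figure is very sensitive to the volumes you plug in, which is why estimates like this land anywhere from a few TiB to the 12TiB quoted):

```python
# Back-of-envelope scaling of MicroSD capacity-per-volume up to a
# 2.5" drive's volume. All dimensions are rough assumptions.

microsd_mm3 = 15 * 11 * 1.0    # MicroSD with casing: ~165 mm^3
drive_mm3 = 100 * 70 * 9.5     # 2.5" laptop drive: ~66,500 mm^3
card_gib = 8                   # the 8GiB card mentioned above

cards_per_drive = drive_mm3 / microsd_mm3
capacity_tib = cards_per_drive * card_gib / 1024

print(f"~{cards_per_drive:.0f} cards -> ~{capacity_tib:.1f} TiB")
```

With these particular dimensions you get roughly 400 cards and ~3TiB; assume a thinner bare-die volume per card and the estimate climbs toward the 12TiB ballpark.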


It's a question of speed

We've seen massive increases in storage density, but speeds haven't kept pace. It's all very well if drives hold 10TiB, but not if it takes a day just to transfer the entire contents to another location - a backup, for example. I'd rather hear about manufacturers working on the speed issues than on increased storage density. 100Gbit/s transfer speeds, without just spinning the platters faster, would be a good place to start.

Realistically, can traditional mechanical drives provide this? All these press releases and discussions about density feel like an attempt to distract us from the real questions, namely speed and power consumption. If those don't change, then increases in size are worth much less than they would appear.


$$$$$$$$ IS a title

Properly implemented, SSDs may be able to win both the performance and capacity crowns, but currently their performance (especially per watt) can be dismal. With a cheap 64GiB SSD costing ~$140 versus a low-end 120GB (~111GiB) HDD at ~$40 - that's ~$2.19/GiB against ~$0.36/GiB - which do you think will remain the standard for the near-to-mid term?
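The cost-per-GiB figures above work out like this (street prices as quoted in the comment; the HDD's marketed 120GB is decimal gigabytes, hence the conversion):

```python
# Cost-per-GiB comparison from the prices quoted above.
ssd_price, ssd_gib = 140, 64             # 64GiB SSD at ~$140
hdd_price, hdd_gib = 40, 120e9 / 2**30   # 120GB HDD ~= 111.8GiB at ~$40

ssd_per_gib = ssd_price / ssd_gib
hdd_per_gib = hdd_price / hdd_gib

print(f"SSD: ${ssd_per_gib:.2f}/GiB")    # ~$2.19/GiB
print(f"HDD: ${hdd_per_gib:.2f}/GiB")    # ~$0.36/GiB
```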

SSDs will only dominate the market if and when manufacturers implement intelligent controllers to get performance where it should be AND bring the price down to no more than 2x the cost of HDDs.

I see the net effect of the technologies mentioned in the article to be pushing that eventuality out into the distant future.


"Areal"?

Is someone from Bristle (Bristol)?!


All this capacity...

...will require a serious file system like ZFS to ensure that data is not lost: sufficient redundancy, block checksums, and regular data scrubbing to detect and correct latent errors like bit rot. Luckily ZFS does all of that, it's already here, and it's free and open source too. Here's a gizmo I made earlier:

http://breden.org.uk/2008/03/02/a-home-fileserver-using-zfs/
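The redundancy-plus-scrubbing combination described above boils down to a couple of commands (the pool name "tank" and the device paths are placeholders - adjust for your own hardware):

```shell
# Mirroring two disks gives ZFS the redundancy it needs to *repair*
# checksum failures, not merely detect them. (Hypothetical devices.)
zpool create tank mirror /dev/sda /dev/sdb

# A scrub reads every block in the pool and verifies its checksum,
# rewriting any block that fails from the good mirror copy:
zpool scrub tank

# Check scrub progress and any checksum errors found and repaired:
zpool status tank
```

Scheduling that scrub from cron every week or two is what catches bit rot before the second copy also degrades.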


@AC

>the future is likely with the SSD?

I'm sure SanDisk et al would like this to be the case, but current tech has proved a little disappointing in terms of performance and longevity. It's early days though; if there is market demand then no doubt they will solve these problems eventually.

Heat-Assisted Magnetic Recording? Sounds a little like the old Magneto-Optical drives!


@It's a question of speed

"""100Gbit/s transfer speeds without just speeding up platters would be a good place to start."""

I really wonder what you'd do with that sort of storage bandwidth, seeing as how (dual channel) DDR2 800MHz does about 12.8GB/s (~100Gbit/s) under ideal conditions. And a 16x PCI Express slot tops out at a theoretical 32Gbit/s.

Oh yeah, and the fastest SATA ports currently planned can only do 6Gbit/s.

You should also know that the only way to speed up a large sequential transfer (like your day-long backup example) is to increase density, which seems to be the very thing you'd rather they didn't work on at all. The problem, which you noticed, is that speed increases with linear density (bits under the head per revolution) whereas capacity increases with areal density. That means capacity will grow as roughly the square of sequential speed for rotating magnetic storage media.
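The argument above can be sketched with made-up drive figures (both the starting capacity/speed and the improvement factor are illustrative):

```python
# If bits-per-inch and tracks-per-inch both improve by the same factor k,
# sequential throughput grows by k but capacity grows by k^2 -- so the
# time to read a whole drive end to end grows by k, not shrinks.

def scale(capacity_gb, speed_mbps, k):
    """Scale a drive's capacity and sequential speed by density factor k."""
    return capacity_gb * k**2, speed_mbps * k

def full_read_s(capacity_gb, speed_mbps):
    """Seconds to stream the entire drive sequentially."""
    return capacity_gb * 1000 / speed_mbps

cap, speed = scale(500, 100, 4)           # densities improve 4x
print(cap, speed)                         # 8000 GB, but only 400 MB/s
print(full_read_s(500, 100))              # old drive: 5000 s (~1.4 h)
print(full_read_s(cap, speed))            # new drive: 20000 s (~5.6 h)
```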

The good thing about increased density is that smaller drives get cheaper, and you can put them into an array, where you can actually make them significantly faster, given that you know what you're doing.

But there's no way you're going to get 100Gbit/s on anything at all short of a supercomputer interconnect bus. I believe the fastest off-the-shelf connections you can buy (for a whole lot of money) are still limited to about 40Gbit/s, although I don't keep track of those and they may have increased recently.


More expensive storage may be an advantage

I wouldn't mind seeing storage technology stall at today's densities for quite a while; that would keep very large storage arrays much more expensive. Without the prospect of almost unlimited amounts of cheap storage, governments would not be able to force through draconian laws mandating the indefinite storage of all communications data.


@Nexox Enigma

I think you missed the point there. Of course you can't get anywhere near that level with current gear, but that's why I'd rather they worked on it. 100Gbit/s was a figure pulled mostly from the air, although in relation to the rate at which storage density has increased, 33x the current theoretical maximum of the SATA 3.0Gbit/s spec isn't such a giant leap. It might even be an impossible figure, but if that's the case do we really want 10TB drives instead of ten 1TB drives?

SATA 6Gbit/s is a step forward, although given that most drives physically can't even achieve 1.5Gbit/s it's of little use! A 2x leap forward just isn't great enough to free us of the tiresome waiting while files are transferred about.

Having endless amounts of space on a single disk is a hindrance if all that data can't be moved around at a reasonable speed. We're not talking about processing that data, just getting it from one storage device to another this side of the next century. Yes, it's possible to push speeds up through server-level kit and arrays, but that's of no use to the average user who just wants to know why it's taking an hour to copy a few GB of files (never mind TBs).
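The waiting-time arithmetic above is easy to make concrete (sustained best-case rates assumed, no protocol overhead - real transfers are slower):

```python
# Hours to move a full drive over interfaces of various line rates.

def hours_to_copy(size_tb, rate_gbit_s):
    """Best-case transfer time: decimal TB over a raw Gbit/s line rate."""
    bits = size_tb * 1e12 * 8
    return bits / (rate_gbit_s * 1e9) / 3600

# SATA 1.5/3/6 Gbit/s, plus the 100Gbit/s wished for above:
for rate in (1.5, 3.0, 6.0, 100.0):
    print(f"10TB at {rate:g}Gbit/s: {hours_to_copy(10, rate):.1f} h")
```

Even at a flawless 6Gbit/s, a 10TB drive takes nearly four hours to copy; at 1.5Gbit/s it is most of a working day.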

Maybe SSD is the answer after all - the current fastest SSDs are already 2x faster than the fastest HDDs, and the technology is in its infancy. SSDs are the only consumer-level devices capable of speeds above SATA 1.5Gbit/s.


Speed is definitely an issue

Believe me, I'm filling up my brand new 1TB hard disk right now and it's taking an age. It'll probably be the end of next week before I've had time to transfer all my DVDs and stuff.

10TB is all well and good but if the read time sucks ...... it sucks

Some time in the future, data will be stored in the spins of electrons in some fancy material. Then we'll be looking at areal densities of PB+ per square inch. But if the IO sucks...


Typical Crippleware for early adopters..

It seems that today's SSDs are 'lame' - perhaps this is by design? There is no reason to bring a perfect product to market immediately if there is money to be made from 30 incrementally better revisions. HDDs will continue to be the standard for the time being; those with more money than sense will float the SSD bloat until SSDs become useful and show their true potential. Until that point, crippleware and waste will be the standard.
