Disk areal density: Not a constant, consistent platter

A disk's areal density varies according to where you look on a platter. This writer had thought it was constant across a platter but, wouldn't you know it, it's not. The data recording technology can provide a uniformly dense layer of potential bits across a disk platter's surface but differing track lengths and radii, and head …

  1. Andy The Hat Silver badge

    Memories ...

    I haven't seen a disk platter like the headline pic in years ... oh the memories of the smell of oxide dust in the disk pack :-)

    1. FartingHippo
      Megaphone

      Re: Memories ...

      HEAD CRASH!!! Run for the hills!

    2. Christoph

      Re: Memories ...

      Hoiking the things out of the washing-machine sized drives every day to run the backup.

    3. wilber

      Re: Memories ...

      And the calls in the middle of the night to come and replace and align the heads. :-(

    4. Mike Arnautov

      Re: Memories ...

      I still have one of those crashed platters. Youngsters often refuse to believe me when I tell them that the platter's total (i.e. two-sided) storage capacity was about 31MB. :-)

      1. pierce
        Coat

        Re: Memories ...

        The first disk drive I worked with had a single-platter cartridge of 1 megabyte total capacity across both sides. Actually, it was 510,000 words (16 bits each), but that's close enough to 1MB as makes no difference. The platter was 12 or 14 inches in diameter, spun at the blinding speed of 1500 RPM, and one of these disk packs held the whole operating system, complete with a FORTRAN compiler, an assembler, and other programming tools.

        1. seacook
          Pint

          Re: Memories ...

          Friday night. Head crash in drive 2. The operator needed to access data on the disk pack, so he moved it to drive 3. Unable to read. Repeat with drive 4, then drive 5, then drive 6, and finally the last drive in the string, drive 7. Of course, the disk packs originally removed from each of those drives (to make room for the crashed pack from drive 2) were replaced into their respective drives afterwards. The operator realized that there was a major problem! Finally called for help.

          Evening destruction count: 6 × 50MB disk packs and 43 heads across 6 drives.

          Pizza and lots of beer the next day, after a premium-hours call-out to repair all the issues.

          1. tirk
            Happy

            Re: Memories ...

            Glad I'm not the only old fart here ;-)

            The joy of optimising COBOL sort steps by seeing how much the disk drives (damn, I forget the model number!) shook on the old 360/40. Don't get me started on punched cards....

    5. SeanExablox

      Re: Memories ...

      Andy The Hat,

      This is a picture from Exablox's office. We keep them on display to do exactly that: bring back memories of the good ol' days.

  2. JeffyPoooh
    Pint

    Duh...

    The RPM has to be constant (obvious). Therefore the linear speed of the disk surface is directly proportional to the radius.

    The electronics will have several modes (not an infinitely smooth adjustment), and they'll have to 'switch gears' at certain points.

    Therefore the areal density has to be variable across the surface.

    The only way it could be held constant would be if the electronics could smoothly adjust to match the physics. And that's obviously not practical when working at the bleeding edge of what's technically possible. It would be practical to be smooth and continuous with low-speed, garden-variety electronics, but not when using the sort of hardware ASICs and gate arrays that would be necessary at these data speeds.

    This is all perfectly obvious. Why would anyone assume different?

    One can focus on the peak areal density and then mention the typical spatial efficiency that can be achieved given the limitations of the technology.
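
    A back-of-envelope sketch of that physics (a minimal Python example; the RPM, radii and linear density are illustrative assumptions, not any real drive's figures) shows both surface speed and per-track capacity scaling linearly with radius, which is exactly why the electronics must switch rates somewhere:

      import math

      rpm = 7200           # assumed spindle speed
      bits_per_mm = 60000  # assumed constant linear recording density

      for r_mm in (20, 30, 46):  # rough inner-to-outer radii, 3.5" platter
          v = 2 * math.pi * r_mm * rpm / 60 / 1000       # surface speed, m/s
          track_bits = 2 * math.pi * r_mm * bits_per_mm  # capacity of one track
          print(f"r={r_mm} mm: v={v:.1f} m/s, ~{track_bits / 8 / 1024:.0f} KiB/track")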

    1. Roo
      Windows

      Re: Duh...

      "This is all perfectly obvious. Why would anyone assume different?"

      That was my initial thought - based on reading the manual for a CDC Wren hard drive.

      However, it is technically possible for a drive to handle a variable areal density, either through signal processing or by adjusting the motor speed (Compact Discs have been doing this for donkey's years).

      Varying spindle speed would probably be pretty dumb for random-access drives - so my money would be on varying bit rate to maintain near-constant areal density. I have a feeling Fujitsu Eagles may have done that trick - but I could be confusing them with something else - it's been a long time since I delved into hard drive schematics. :)

    2. Lusty

      Re: Duh...

      Actually I'd say we're at the point where that would be an easier problem to solve than increasing areal density further. If the max and average are that far apart then there is a significant amount of untapped storage capacity which is potentially a clever firmware upgrade away. Something being hard doesn't make it impossible, and given how hard HDD manufacturers are finding it to shrink the bits, this seems the easy way out to me until they go flash.

    3. nerdbert
      Holmes

      Re: Duh...

      Actually, if you look at the suppliers' chips, they're pretty close to "infinitely smooth adjustment". Frequency selection comes in roughly 1% increments over the range of the SoC in question, and the better SoCs can cover from 100MHz to 3+GHz.

      The AD jumps actually occur in two places: the read zones on the drive, and the servo. In general, most drives have 20+ read zones, within each of which the frequency of the data on the disk is fixed. There's a tradeoff between having tons of zones and optimization time, as well as SoC frequency-switching time as you switch zones. But the more zones you have, the more efficiently you can pack those bits, since as the diameter increases you can change the frequency to keep the linear density constant.

      But where the real difference in AD occurs is in the servo. For many drives, the servo is written at a fixed frequency from inner diameter to outer diameter. Zoned servo is relatively rare, so in general there's a huge penalty in AD as you go to the outer diameter: the servo wedge gets very large compared to the data, and you lose a ton of AD.
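
      To see the zoning tradeoff described above in numbers, here's a rough sketch (all figures assumed: zone count, radii, target density; one fixed channel frequency per zone keeps linear density near-constant):

        import math

        rpm, zones = 7200, 20     # assumed spindle speed and zone count
        r_in, r_out = 20.0, 46.0  # assumed inner/outer data radii, mm
        bits_per_mm = 60000       # assumed target linear density

        for z in range(zones):
            r = r_in + (r_out - r_in) * z / (zones - 1)  # zone start radius
            v = 2 * math.pi * r * rpm / 60               # surface speed, mm/s
            f = v * bits_per_mm                          # channel bit rate, bits/s
            print(f"zone {z:2d}: r={r:5.1f} mm  ~{f / 1e9:.2f} Gbit/s")

      With these assumed numbers the channel rate steps from roughly 0.9 to 2.1 Gbit/s across the 20 zones, comfortably inside the SoC range quoted above.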

    4. User McUser
      Boffin

      Re: Duh...

      "The RPM has to be constant (obvious)."

      Well no, it doesn't *have* to be. Disks can either be CLV (Constant Linear Velocity) or CAV (Constant Angular Velocity.) In the former, the disk's rotation rate slows as the read/write head moves towards the outside of the platter. I doubt this would work very well in drives with more than one platter, which is probably why nobody does it anymore (AFAIK.) CAV is likely to be a lot cheaper too.
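
      To put rough numbers on that distinction (a minimal sketch; the target linear velocity and radii are assumptions): holding linear velocity constant means the spindle speed has to swing by more than 2x between outer and inner tracks.

        import math

        v_target = 15.0  # m/s, assumed constant-linear-velocity target

        for r_mm in (20, 30, 46):  # assumed inner-to-outer radii, mm
            rpm = v_target * 60 / (2 * math.pi * r_mm / 1000)
            print(f"r={r_mm} mm -> {rpm:.0f} RPM")  # ~7160 down to ~3110 RPM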

      1. nerdbert
        Holmes

        Re: Duh...

        DVDs and CDs are CLV. HDDs aren't CLV because the acceleration and settling time is too hard a problem for something that's random access. As slow as HDD access is, it'd be far worse if you had to change spindle speed as you changed radius. Even drives with one platter would suffer horribly if you tried to do CLV. There's a reason skipping segments/songs is so slow on DVDs and CDs...

      2. Neoc

        Re: Duh...

        Damn, beat me to it. Have an upvote.

      3. JeffyPoooh
        Pint

        Re: Duh...

        @U MU

        Two points:

        How many horsepower would you need to adjust the RPM within the Random Seek Time requirement? Or would you like to adjust the Random Seek Time requirement from milliseconds to seconds? That's obvious, in my opinion.

        The multi-platter HDDs I've seen had all the heads moving in unison on one voice coil. In case you were assuming differently.
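
        For a back-of-envelope answer to the horsepower question (every figure assumed: platter mass and radius, speeds; motor and driver losses ignored):

          import math

          m, r = 0.02, 0.046              # assumed platter: 20 g, 46 mm radius
          I = 0.5 * m * r * r             # moment of inertia of a uniform disc
          w1 = 7200 * 2 * math.pi / 60    # 7200 RPM in rad/s
          w2 = w1 / 2                     # halving speed for a full-stroke CLV seek
          dE = 0.5 * I * (w1**2 - w2**2)  # kinetic energy to shed, joules

          for dt in (0.010, 1.0):         # within a 10 ms seek vs a full second
              print(f"dt={dt * 1000:6.0f} ms: ~{dE / dt:.0f} W ({dE / dt / 746:.2f} hp)")

        That works out to over half a horsepower per platter just to re-spin inside a seek-time budget, which rather makes the point.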

    5. Ole Juul

      Re: Duh...

      "This is all perfectly obvious. Why would anyone assume different?"

      Probably because they didn't grow up with floppy disks and MFM.

      1. JeffyPoooh
        Pint

        Re: Duh...

        @OJ

        I still have a gadget to hole-punch/notch a single-sided 5.25-inch floppy so it could be flipped over to use the other side.

  3. jason 7

    Short strokin'!

    Love the first 100GB on my single-platter 1TB drive.

  4. Gunnar Wolf
    Holmes

    Been that way for many many years

    As there's quite a bit of logic to using sectors that span a smaller angle on the outer tracks of a disk, this has been in use since the 1980s. Originally it was quite expensive, as each zone of the disk had to be rotated at a different speed (e.g. the original Macintosh floppies), but it is now achieved with heads that are able to read/write at different linear speeds.

    I'm linking to this Wikipedia article as I contributed some bits to it ;-)

    https://en.wikipedia.org/wiki/Zone_bit_recording

  5. Anonymous Coward
    Anonymous Coward

    Doesn't everyone who works in storage know this?

    Otherwise you'd miss out on optimizing storage by whether LUNs were placed on the outer (faster) or inner tracks. The sequential read/write performance is about 2x different from the innermost to the outermost track, so this is definitely something worth paying attention to!

    Never wondered why EMC's tools specifically document that LUN assignment starts on the outer tracks and works its way inside? If they were all the same, no one would care...
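
    The ~2x figure falls straight out of the geometry: at fixed RPM and near-constant linear density, sequential throughput scales with radius. A sketch with assumed numbers:

      import math

      rpm, bits_per_mm = 7200, 60000  # assumed spindle speed and linear density

      for label, r_mm in (("inner", 20.0), ("outer", 46.0)):  # assumed radii
          mb_s = 2 * math.pi * r_mm * bits_per_mm * (rpm / 60) / 8 / 1e6
          print(f"{label} track (r={r_mm} mm): ~{mb_s:.0f} MB/s")

    With these assumptions that's roughly 113 vs 260 MB/s, a ratio of about 2.3x.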

    1. Nate Amsden

      Re: Doesn't everyone who works in storage know this?

      Maybe I'm missing something here, but one revolution of a platter over the outside tracks will pass over a ton more bits than over the inside tracks (more surface area). I don't think it has much to do with how dense it is. I suppose density could play a role if the inner tracks were much more dense than the outer ones, but the gist I get from the article is that they don't have that much control over it; some tracks are dense, others are not, which sounds kind of random.

      I lay my 3PAR storage out the opposite way: new data goes to the inner tracks and works its way out, to sort of ease the burden on the system as it fills up, or something.

      1. Anonymous Coward
        Anonymous Coward

        Re: Doesn't everyone who works in storage know this?

        I'm not sure how what you're saying conflicts with what I'm saying. I'm not suggesting the outer tracks are more dense from a bits per inch standpoint, but they are denser from a bits per track standpoint. The outermost tracks are bigger/longer (2π * r and all that) so a ZBR drive writes more sectors in them. Thus, the head is passing over more data per revolution in the outer tracks, meaning faster read/write performance in the outer tracks. Significantly so; definitely worth optimizing for, especially with sequential I/O.

        1. Lusty

          Re: Doesn't everyone who works in storage know this?

          "Significantly so; definitely worth optimizing for, especially with sequential I/O."

          I'd disagree. It takes only a small number of drives to saturate the SAS connection, iSCSI/FC connection, or even the PCIe bus. Optimising for sequential IO this way doesn't buy enough of a difference unless you're really, really constrained by power, cooling and space. There is a lot of very sciencey-sounding stuff about short stroking, but a decent storage guy can easily design a better solution by other means.

          What you're saying was true with ye olde large-format drives, less so with 3.5" and almost irrelevant with 2.5", since the reduction in form factor has allowed the number of drives to increase, power to decrease, and has meant there is considerably less difference between inner and outer tracks. Anyone who needs to write sequentially that fast, but whose capacity requirement is low enough for short stroking to be viable, should by now be using SSD arrays. Anyone needing the capacity that requires large 3.5" drives can't afford to lose the space to short stroking.

          1. Anonymous Coward
            Anonymous Coward

            Re: Doesn't everyone who works in storage know this?

            I wasn't talking about short stroking, where did you get that from?

            1. Lusty

              Re: Doesn't everyone who works in storage know this?

              You're talking about using the outer tracks of a drive - that is known as short stroking whether you're doing it for throughput or latency reasons. As I explained though, neither is a useful technique with modern hardware since there are many ways to achieve throughput and latency in a less wasteful way.

    2. Lusty

      Re: Doesn't everyone who works in storage know this?

      Short stroking is very outdated these days, and generally it's only the "traditional" vendors doing it because it makes them sound clever (they aren't). The real clever people have already switched to flash which is many thousands of times faster and offers true consistent low latency. Some vendors even support short stroking and long stroking on the same drive for different LUNs which offers essentially zero benefit.

  6. Martyn 1

    A different type of "disk crash"

    One night I saw a stack of those removable packs fall off the back of a disk drive: the "washing machine" was on "fast spin" and the vibration made them crawl off the edge. They belonged to another company who shared our DC (but didn't run a night shift), so I left the packs on their ops manager's desk with a note taped to them warning him not to try to load them into a drive :-)

  7. prof_peter

    Of course they're complicated

    Unlike ancient drives with fixed numbers of sectors per track, and thus varying areal density, current drives with ZCAV (google it) try to achieve constant areal density by varying the number of bits on each track. Since bits come in 4KB sectors you're never going to achieve true constant areal density, although at 1-2MB per track, +/- 4KB isn't a big difference.

    However it's quite possible that what was being referred to is the difference in areal density between *heads* (or equivalently, surfaces) in a modern drive. Different heads in the same device can have wildly different (e.g. 25%) areal densities - google "disks are like snowflakes" for a very interesting article from Greg Ganger and the CMU PDL lab on why this is the case. A simple test in fio can show this behavior - e.g. latency of 64KB sequential reads across the raw disk LBA space. You'll see the disk serpentine pattern, with different speeds for each surface.
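
    A minimal version of that test, in Python rather than fio (the device path is hypothetical; Linux-only, needs root; O_DIRECT bypasses the page cache; coarse sampling like this shows the outer-to-inner throughput falloff, while much denser sampling would be needed to resolve the per-surface serpentine pattern):

      import mmap, os, time

      DEV = "/dev/sdX"  # hypothetical device node, read-only
      BS = 64 * 1024    # 64KB reads, as suggested above
      RUN = 64          # timed reads per sample point (4MB sequential)

      buf = mmap.mmap(-1, BS)                       # page-aligned buffer for O_DIRECT
      fd = os.open(DEV, os.O_RDONLY | os.O_DIRECT)  # bypass the page cache
      size = os.lseek(fd, 0, os.SEEK_END)

      for i in range(100):                 # sample points across the LBA space
          off = (size * i // 100) & ~4095  # 4KB-aligned offset
          os.lseek(fd, off, os.SEEK_SET)
          os.readv(fd, [buf])              # untimed read absorbs the seek
          t0 = time.perf_counter()
          for _ in range(RUN):
              os.readv(fd, [buf])          # sequential reads from here on
          mb_s = RUN * BS / (time.perf_counter() - t0) / 1e6
          print(f"offset {off}: ~{mb_s:.0f} MB/s")

      os.close(fd)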
