Google wants new class of taller 'cloud disk' with more platters and I/O

Google has shared a White Paper (PDF) in which it calls for major revisions to disk drive design. Titled “Disks for Data Centers”, the paper is unashamedly Google-centric inasmuch as it calls for disk-makers to rethink their products to suit the ad giant's needs. As the paper explains, those needs are very substantial: just …

  1. Tom 64

    Multiple arms

    I've oft wondered why HDD companies don't put multiple RW arms in their disk enclosures, there is certainly enough room in a 3.5" chassis. This would probably be the cheapest improvement to make too; for the cost of a little more logic in your firmware controller you can effectively almost double I/O.

    1. DougS Silver badge

      Re: Multiple arms

      This has been done, and didn't prove cost effective. The RW arms are not cheap, adding more increases vibration so you can't get as much density unless you move the arms in unison so one arm isn't moving while the other is trying to read/write...

      If they haven't done this work in all the years when they had tons of money to invest in new hard drive technology, they sure won't now, when SSDs have taken the high end off the market and drive makers are barely able to break even.

      1. ntevanza

        Re: Multiple arms

        Multi-actuators were made redundant by RAID. It's cheaper to add another disk and stripe, which gives you the same performance benefits. But RAID has no place in hyperscale architectures. I'd like to see what more actuators would do to the IOPS/GB function.

    2. mi1400

      Re: Multiple arms

      Only sensible comment in this forum is "Bring back the Quantum Bigfoot!" ...

      To add my part... Following the Bigfoot concept, track zero (the outermost circumference of the platter) should be as large as possible. My addition is that the innermost track should be as large as possible too: the spindle axle can stay the same diameter, but writing should be denied to the several innermost tracks. Say the innermost track's circumference should NOT be less than 30% of track zero's. This would avoid the crawl-speed reads from the innermost tracks, where data passes the head slowly no matter how fast the spindle spins, even at 15k. The innermost 30% of the platter (think circumference) could even be plastic rather than metal, to silence anyone asking about the wasted area - or there could be a firmware unlock feature that only the foolish would enable for themselves.

      Also, there should be a firmware-based defrag. The world has suffered enough damage at the hands of OS-based defraggers and their fancies; a firmware defragger could intelligently keep moving the hottest data towards track zero, the outermost track/ring.
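The intuition behind keeping data off the inner tracks can be sketched in a few lines of Python. With zoned bit recording, linear bit density along a track is roughly constant, so at constant RPM the sustained transfer rate scales with track radius. The density figure and radii below are illustrative assumptions, not values from the comment:

```python
# Sketch (illustrative numbers): sustained transfer rate of a track is
# proportional to its radius at constant RPM and constant linear bit density,
# which is why the innermost tracks read at a crawl.
from math import pi

RPM = 7200
BITS_PER_MM = 5e4            # assumed linear bit density along a track

def transfer_rate_mb_s(radius_mm):
    """Sustained MB/s when streaming a track at the given radius."""
    bits_per_rev = 2 * pi * radius_mm * BITS_PER_MM
    return bits_per_rev * (RPM / 60) / 8 / 1e6

outer = transfer_rate_mb_s(46.0)   # near the edge of a 3.5" platter
inner = transfer_rate_mb_s(15.0)   # near the spindle
print(f"outer {outer:.0f} MB/s, inner {inner:.0f} MB/s, ratio {inner/outer:.2f}")
```

The ratio comes out to the radius ratio (15/46, about a third), which is the commenter's point: denying writes to the innermost band caps how slow the slowest track can be.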

  2. dsmith

    Mr

    Bring back the Quantum Bigfoot!

    1. big_D Silver badge

      Re: Mr

      Current drives are, what? Quarter height? "Simply" go back to full height designs...

    2. CAPS LOCK Silver badge

      Bring back the Quantum Bigfoot!

      This is exactly the solution, about double the storage for the same number of parts, low speed motors for cool running but with high I/O at the outside edge. What's not to like?

  3. jake Silver badge

    I'm pretty sure ...

    I've been stacking spindles (with redundancy) for about 30 years now ...

  4. toughluck

    What's stopping them from short stroking disks that they want fast I/O from? If disk vendors offered multiple r/w arms (separate servos and logic), this could be viably done in a single chassis -- let the customer pick whether they want better IOPS or higher capacity -- on demand even.

    1. Alan Brown Silver badge

      "What's stopping them from short stroking disks that they want fast I/O from?"

      Why would you bother when SSDs do this fantastically better?

      The reasons for not having multiple sets of arms have already been discussed.

      Short version: It didn't work. Slightly longer one: When it did work it was unreliable.

      The reasons for not short stroking are simply that you end up with a device that costs about as much per GB as an SSD (or more!), with performance far inferior to SSD (which is why SSD has eaten the 10-20krpm market too).

      If the chocolate factory put in an order for a couple million devices they can get them customised any way they want, but...

      1: They won't be cheaper than commodity items (they'd need to order 10-100 times that many)

      2: HDDs aren't (practically) going to get any denser. All the R&D labs are closed. HAMR is the end of the line, and the extra head size it requires has proved to outweigh the platter density gains.

      3: All the platters come from just two manufacturers (possibly down to one now) and all the heads come from another manufacturer. Both of them have been coordinating years in advance with drivemakers to ensure economies of scale in manufacturing. The Thai floods were the warning shot that HDD supply chains are subject to nasty SPOF issues. BASF is not going to crank up a 5.25" platter line just because Google wants a couple of million drives, etc.

      4: SSDs are still getting denser/faster. 3.5" and 2.5" formats are legacy items now, just as 5.25" and 8" were in the past.

      Bigfoots were fun, but slow, fragile and, even if treated with kid gloves, pretty unreliable. Having arms that had to seek over that much platter space, and cope with that much variation in the resultant linear velocity under the heads (which affects fly height, because it's that velocity which drives air through the venturi that lifts the head off the disk), turned out to be a bad idea at those platter densities, and it wouldn't be possible at current densities, where fly height is less than a tenth of that of Bigfoot-era heads.

      It was rumoured that Quantum tried multiplatter bigfoots but that they never even got to production prototype stage due to unreliability.
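The velocity spread described above can be quantified: at constant RPM, the linear velocity under the head is proportional to radius, so a 5.25" platter sees a much wider inner-to-outer spread than a 3.5" one. The radii and spindle speeds in this sketch are illustrative assumptions:

```python
# Sketch (illustrative radii/RPM): linear velocity of the platter surface
# under the head, and the inner-to-outer spread the head must tolerate.
from math import pi

def surface_velocity_m_s(radius_mm, rpm):
    """Linear velocity of the platter surface passing under the head."""
    return 2 * pi * (radius_mm / 1000) * (rpm / 60)

for name, r_in, r_out, rpm in [("5.25in Bigfoot-class", 15, 62, 3600),
                               ("3.5in drive", 15, 46, 7200)]:
    v_in = surface_velocity_m_s(r_in, rpm)
    v_out = surface_velocity_m_s(r_out, rpm)
    print(f"{name}: {v_in:.1f}-{v_out:.1f} m/s, spread x{v_out / v_in:.2f}")
```

The spread depends only on the radius ratio, so the bigger 5.25" platter forces the air bearing to work over a roughly 4x velocity range versus about 3x for a 3.5" platter, regardless of spindle speed.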

  5. DougS Silver badge

    Taller disks will have worse IOPS/GB

    In today's world disks are bulk storage for stuff where IOPS is less important. So while I agree with taller disks to reduce $/GB and increase GB/rack, if they really want more IOPS/GB from disks they want the low end single platter disks...
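The IOPS/GB dilution DougS describes can be shown with simple arithmetic: a single actuator delivers roughly constant random IOPS no matter how many platters sit under it, so adding platters grows capacity while IOPS/GB falls. Both figures below are illustrative assumptions:

```python
# Sketch (illustrative figures): one actuator gives roughly fixed random
# IOPS, so more platters means more GB but fewer IOPS per TB.
IOPS_PER_ACTUATOR = 120      # assumed random IOPS for one 7200rpm actuator
GB_PER_PLATTER = 2000        # assumed capacity per platter

def iops_per_tb(platters):
    return IOPS_PER_ACTUATOR / (platters * GB_PER_PLATTER) * 1000

for n in (1, 3, 12):
    print(f"{n:2d} platters: {n * GB_PER_PLATTER:6d} GB, "
          f"{iops_per_tb(n):.1f} IOPS/TB")
```

Under these assumptions a 12-platter drive has one twelfth the IOPS/TB of a single-platter drive, which is why low-end single-platter disks are the pick if IOPS/GB is what matters.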

  6. Pascal Monett Silver badge

    Wishing wells are nice things

    Google would like the butter, the money for the butter and the fridge thrown in for free.

    I am not a hard disk expert, far from it, but from where I sit I don't think that what Google wants can be done. From a logical point of view, what Google wants is contradictory: taller disks with multiple platter sizes, multiple RW heads and multiple I/O ports. That means more complexity, which means a shorter life span - if it can work in the first place.

    I think the solution is simple : multiple slim disks in RAID configuration. That way you have a maximum amount of RW heads per I/O port. Oh wait, that's the present situation. Oops.

    But I'm not knocking Google engineers - these guys are not idiots. Which means I wonder why they ask for these things. There's something more to this.

    1. Frank Rysanek

      Re: Wishing wells are nice things

      Exact same feeling here. "I must be missing something, or the paper was written by someone from the PR department."

      What would you achieve by having two different platter sizes per drive, on a common spindle? (The smaller platters would be small AND slow in terms of IOps AND slow in terms of MBps - but let's not start with that.)

      On Google's scale, if you know beforehand what data is hot and what is not, you can sort it in software and store it on different storage volumes built with different optimizations (slow big drives vs. fast smaller drives vs. flash or whatever). How is the *drive* supposed to know which LBA sector is going to be hot and which one not? Also, map LBA sectors to physical sectors in some hashed way, other than "as linear as possible"? Really, at the spinning drive's discretion?

      Even if the drive did have two sorts of platters, fast and slow, considering that it has no a-priori knowledge of what data is going to be hot, perhaps the idea is that it could move the data to the fast vs. slow region "afterwards", based on how hot vs. cold it actually turned out to be... thus wasting quite some of its precious IOps on housekeeping! It had also better infer the FS layout happening at several layers *above* LBA, so it can relocate whole files rather than individual sectors, as otherwise it would ruin any chance of enjoying a sequential read ever again ;-) And oh, by the way, we'll take your RAM cache - you can't really use it efficiently anyway, it's more use to our FS-level caching, thank you.

      Seems to me that, complexity-wise, having several categories of simple disk drives (of different sizes and IOps rates) is obviously more manageable than having mechanically complex drives with hallucinogenic firmware managing nonlinear sector mapping and a fiendish tendency to try to guess your FS-level allocations and occasionally get them wrong...

      There's an old Soviet Russian children book series by Nikolai Nosov, sharing a lead character called Neznaika. Never mind the plots with often leftist outcomes/morales... I recall in one of the books there was a lazy lunatic cook, wasting a lot of time by trying to invent "dry water" in his head... The point of dry water was, that you could wash dishes without getting wet :-) This is a hell of a metaphor for many concepts and situations...

      1. John Stoffel

        Re: Wishing wells are nice things

        Smaller platters on the same spindle would be used for high(er) IOPs activity, because the head would have less distance to travel. Short stroking, in essence.

        Now if you have different-size platters on a spindle (think full-height 3.5" drives, like the old 5.25" FH MFM-506 disks), then maybe that would work, with dual arms... not sure.

        John
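The short-stroking benefit mentioned above has a neat closed form: if accesses are confined to an outer band covering fraction f of the full stroke, the mean distance between two uniformly random target tracks drops from 1/3 of the stroke to f/3. A quick Monte Carlo sketch (all numbers illustrative) confirms it:

```python
# Sketch: mean seek distance when accesses are confined to a band of
# width f (as a fraction of full stroke), analytic vs. Monte Carlo.
import random

def avg_seek_fraction(f):
    """Analytic mean |x - y| for x, y uniform on [0, f]."""
    return f / 3

def avg_seek_mc(f, n=100_000, seed=1):
    """Monte Carlo estimate of the same mean seek distance."""
    rng = random.Random(seed)
    return sum(abs(rng.uniform(0, f) - rng.uniform(0, f))
               for _ in range(n)) / n

print(avg_seek_fraction(1.0), avg_seek_mc(1.0))   # both ~0.333
print(avg_seek_fraction(0.2), avg_seek_mc(0.2))   # both ~0.067
```

So restricting the heads to the outer 20% of the stroke cuts the average seek distance by a factor of five (at the cost of most of the capacity), which is the IOPS-vs-capacity trade short stroking buys.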

  7. Rafael 1

    'But, grandmother, what big hard drives you have!' she said

    'All the better to see you with, my dear.'

  8. Dave 32

    Disks

    There are all sorts of variations possible.

    Consider, for example, a disk with 12 platters instead of the more normal number (3?). Now, that many R/W heads (23 or 24) will be a lot more massive than the 5 or 6 heads of a three-platter disk, which would require either a stronger drive mechanism or slower access. But who says that all 24 heads have to be controlled from the same actuator? Group six heads together and put four actuators around the disk. Now the drive mechanism can be the same, along with the same speed, as for the smaller disk, but you get four times the capacity out of it.

    There used to be disks with multiple heads per platter surface. These could produce faster access by reducing the wait time for the desired sector to pass under a R/W head.

    There also used to be disks with fixed-mounted heads. These were mainly useful for paging applications, and areas where extremely fast access was required to a limited amount of data.

    And many other variations were tried historically.

  9. Anonymous Coward
    Anonymous Coward

    DIY

    $70B revenue, virtually no tax, make your own disks, you can afford to do it.

  10. Jim O'Reilly
    Holmes

    Google challenges physics

    Whoever wrote this at Google seems to be naive. Basic HDD technology has hit a wall. Getting more density on a platter will mean HAMR, which looks very hard to fabricate commercially. So what are the alternatives? More disks? That means helium-filled drives, but that's OK. It also means stabilizing the spindle runout, and we are already at the limit for that with current track densities.

    HGST tried dual-actuator drives and dropped them. The actuators interact via vibration, and they also create turbulence that messes up smooth flying by the heads. Anyway, that only increases IOPS to around 200, which isn't in the same league as SSD.

    Maybe we need to bring back the 5.25 in form factor. But wait! That has disk stability problems on the outer tracks, so it's a non-starter at current densities.

    Any way you look at it, HDD speed and capacity growth are effectively at a standstill, which makes me wonder if Google's engineer knows what he's talking about. Perhaps the idea is to bluff competitors into staying with HDD!

  11. John Klos

    Did the work for you...

    http://www.klos.com/~john/scsidrive.jpg

    You're welcome.

  12. Bernd Felsche
    Boffin

    You could pack many 2½" platters in 5¼" FHD

    Put the platters on a long, "horizontal" spindle running into the "depth" of the form factor. It should go without saying that the spindle would be simply supported at both ends.

    Also several head actuators to access groups of platters on the same spindle.

    The "imbalance" problem is a load of bollocks. Give the design task to a proper mechanical engineer.

    HDD effective transfer rates are affected by spindle speed, actuator seek and settling times; and contention where requests for data encounter a physical conflict for resources; be that a head assembly to get to a particular group of platters or data in different parts of a platter.

    Lots of small platters reduces the probability of platter-based contention. Many head assemblies to access disparate data on different platters reduces head contention. Spinning the platters as fast as possible (>15krpm) substantially reduces the latency of access, returning data earlier and therefore the probability of encountering requests that lead to contention. Likewise; using small head assemblies reduces their individual inertia, facilitating faster seeks and shorter settling times.

    While track buffers should be retained in addition to the request and data queue space, high-level storage management such as the balancing of data across the platter groups should be left server-side.

    The potential throughput rate for e.g. 12 platters with ca 22 data surfaces spinning at 15krpm calls for better than SAS-4. (160 MB/sec*22)
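The throughput claim above can be checked with trivial arithmetic. The usable SAS-4 payload rate assumed here (~2400 MB/s, after encoding and protocol overhead on a 22.5 Gbit/s link) is my estimate, not a figure from the comment:

```python
# Arithmetic check: 22 data surfaces each streaming at 160 MB/s, against
# an assumed usable SAS-4 payload rate of ~2400 MB/s per lane.
SURFACES = 22
MB_S_PER_SURFACE = 160
SAS4_USABLE_MB_S = 2400      # assumption: usable rate after overhead

aggregate = SURFACES * MB_S_PER_SURFACE
print(f"aggregate {aggregate} MB/s vs ~{SAS4_USABLE_MB_S} MB/s usable SAS-4")
```

The aggregate (3520 MB/s) exceeds a single lane's usable rate under these assumptions, supporting the comment's point that such a drive would call for better than SAS-4 on the host side.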

    1. Nate Amsden

      Re: You could pack many 2½" platters in 5¼" FHD

      Sounds like you're smarter than the billion dollar HDD companies. Go at it and put them out of business.

  13. jslapp

    Let's use what we have 'better' so we don't have to redesign

    “The industry is relatively good at improving GB/$ [gigabytes per dollar], but less so at IOPS/GB [input-output-per-second per gigabyte]"

    A better approach than redesigning hard drives is to add the right software intelligence to fully leverage what we already have. DataCore just proved this to the world again.

    [JUST THE FACTS]

    - DataCore just achieved 1,510,090 SPC-1 IOps

    - Did this at a cost of $0.09/SPC-1 IO

    - Landed in both top-ten categories for Performance and Price/Performance (never been done before)

    - Did this at 99.95 microseconds average response time at 100% load (never been done before)

    Check out Mellor's latest article on this: http://www.theregister.co.uk/2016/02/27/revolution_in_toptier_spc1_benchmarking/

    1. Anonymous Coward
      Anonymous Coward

      Re: Let's use what we have 'better' so we don't have to redesign

      Datacore is a good example of not using hard drives as it was a performance play, not a capacity play.

      If you read what Google have been suggesting, it's all about getting more capacity out of a single spindle (for example by increasing platter count), with some consideration about having a non-shingled area within a drive so you can handle the awful downside of re-writes in a shingled world.

      Sorry to bang on, but what was the point Google was making? Full marks if you said "capacity". Null points if you said "performance" and reference a performance benchmark.

      1. btreiber1

        Re: Let's use what we have 'better' so we don't have to redesign

        This article is about more than just capacity because my colleague quoted the article directly before making his point:

        "“The industry is relatively good at improving GB/$ [gigabytes per dollar], but less so at IOPS/GB [input-output-per-second per gigabyte],” the paper says. A desire to see the industry improve both informs Google's shopping list for a dream cloud disk."

        <. . . .snip . . . .>

        And I add here another quote from the article:

        "Disks with more than one IO source is another idea Google wants realised. The paper imagines disks with more than one actuator arm, or one arm capable of reading more than one track at a time."

        Multiple actuator arms would be about performance and not capacity IMHO.

        1. Anonymous Coward
          Anonymous Coward

          Re: Let's use what we have 'better' so we don't have to redesign

          Yes - disks with more than one actuator arm each will win back the SPC title from solid state. Not.

          Adding more actuators, if commercially feasible (i.e. without costing twice as much and/or significantly degrading the areal density achieved), could slow the IOPS per TB slide we've experienced since drives have got higher capacity at the same rotational speed.

          Don't confuse "absolute performance" and "slightly less shit performance" and use it as an excuse to roll out an SPC-1 benchmark result as the answer to Google's hard drive wish list.

          Flash is performance. Spinning disk is capacity. Expensive turd polishing won't change a hard drive, and the most logical answer to balancing speed and capacity in a single form factor (SSHD) hasn't been that successful (as most scale-out players want to build the smarts in software, and then have the flash be flash and the disk be........... cheap).
