Seagate plays disk cricket with a 12TB Enterprise Capacity drive spinner

Seagate has announced a 12TB helium-filled data centre disk drive, catching up with WD's Ultrastar He12, and providing both SAS and SATA interfaces. The disk represents an update on the existing 10TB Enterprise Capacity drive and comes in 10 and 12TB capacity points. It features a 256MB cache compared to the prior 10TB …

  1. Dwarf Silver badge
    Coat

    Cricket drives

    Never been a fan of drives making cricket-sounding noises, as it generally means my data has gone to the great big bit bucket in the sky.

    I'll get my coat, it's the one with the screwdriver in the pocket.

    Seriously though - it's a good spec with that capacity and cache!

  2. The First Dave

    Of course this drive is lighter - it's full of helium.

    1. Anonymous Coward
      Anonymous Coward

      Escaping Helium?

      It would be interesting to know whether they guarantee the same percentage of helium after five years. What's the leakage rate on these? We haven't gone past 6TB yet for just that reason.

  3. theblackhand

    Missing the important question...

    How many of these do I have to use before my data centre floats away?

    1. Haku

      Re: Missing the important question...

      You want to create a literal cloud storage facility?

    2. Oengus Silver badge

      Re: Missing the important question...

      How many of these do I need to get all my data centre staff talking like Minnie Mouse?

  4. Anonymous Coward
    Anonymous Coward

    Fixed disks, fixed pricing more like.

    Of late, both Seagate and Western Digital seem to have fixed pricing to multiples of the cost of a single 1TB 3.5" drive.

  5. Missing Semicolon Silver badge

    And how long will they last *really*

    Since a depressingly large number of drives-I-must-throw-away-when-I-get-the-data-off seem to be Seagates...

    1. Anonymous Coward
      Anonymous Coward

      Re: And how long will they last *really*

      2.5 million hours MTBF

      How are they allowed to peddle this crap?

      There are 8766 hours in a year. If I have a large pile of these drives, are they really warranting that each drive will fail after (on average) 285 years?

      It's not my experience with Seagate.

      Typically 3% will fail in the first year, and I doubt many would be still spinning after 20 years.

      1. Rob Isrob

        Here ... let me google that for you.

        http://www.zdnet.com/article/mtbf-afr-and-you/

        1. Anonymous Coward
          Anonymous Coward

          Re: Here ... let me google that for you.

          > http://www.zdnet.com/article/mtbf-afr-and-you/

          Based on that table, an MTBF of 2,000,000 hours translates to an Annual Failure Rate (AFR) of 0.44%, which means that if you have 1,000 drives then 4-5 of them will fail in the first year. Fair enough.

          In practice, the AFR increases rapidly with age as the drive wears out. So this MTBF reflects only the *initial* failure rate for a drive in its first year of operation.

          Hence, what they're actually saying is that the mean time between failures is 285 years - but only if you replace the drive with a brand new one every year!! (*)

          (*) Or perhaps every three years, if they're warranting the AFR will remain that low for that long.
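
          For anyone who wants to sanity-check that conversion, here is a minimal sketch (my own illustration of the standard constant-failure-rate arithmetic, not the article's or the poster's figures) that turns an MTBF into an AFR:

            import math

            HOURS_PER_YEAR = 8766  # 365.25 days

            def afr_from_mtbf(mtbf_hours):
                # Annual failure rate implied by an MTBF, assuming a constant
                # (exponential) failure rate - the usual datasheet convention.
                return 1.0 - math.exp(-HOURS_PER_YEAR / mtbf_hours)

            for mtbf in (2_000_000, 2_500_000):
                afr = afr_from_mtbf(mtbf)
                print("MTBF {:>9,} h -> AFR {:.2%}, ~{:.1f} failures per 1,000 drives/year"
                      .format(mtbf, afr, 1000 * afr))

          On those assumptions the drive's quoted 2.5 million hours comes out at roughly 0.35% AFR, i.e. about 3-4 failures per 1,000 drives per year while the drives are young - which is exactly the "replace it with a brand new one every year" caveat above.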

    2. David 132 Silver badge

      Re: And how long will they last *really*

      Yep. Seagate have been on my "avoid where possible" list ever since I put a 60MB 2.5" EIDE drive into my Amiga 1200, only to have it go clunk-clunk-death about a week later. Every so often in the years since, I've tried their drives again - with depressingly consistent results.

      Ditto Western Digital for very similar reasons.

  6. M. B.

    I can't wait to see...

    ...the look on my customers' faces when I have to tell them what the expected rebuild time on an array of these will be. Probably measured in days or even weeks instead of hours. So many people tell me that "performance doesn't matter, we just need capacity". That's all well and good, but in the event of a disk failure, how much risk are you willing to accept that another disk or two won't fail during the rebuild window, especially when you are waiting 3 days for the operation to complete? Especially since it's still just a 7K spindle.

    OTOH, these'll be great in a Data Domain or similar system. "What is your retention policy?" "EVERYTHING. FOR ALL ETERNITY."
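
    For a rough sense of scale on that rebuild-window risk, a back-of-envelope sketch (my own assumptions: independent drives at the nominal new-drive AFR implied by the MTBF discussion above, which aged drives from the same batch in a real array will comfortably exceed):

      import math

      HOURS_PER_YEAR = 8766

      def p_extra_failure(n_surviving, rebuild_hours, afr=0.0044):
          # P(at least one of n_surviving drives fails during the rebuild window),
          # assuming independent drives with a constant annual failure rate.
          rate_per_hour = -math.log(1.0 - afr) / HOURS_PER_YEAR
          return 1.0 - math.exp(-rate_per_hour * n_surviving * rebuild_hours)

      for hours in (24, 72):  # one-day vs. three-day rebuild, 9 surviving drives
          print("{:3d} h rebuild window: {:.3%}".format(hours, p_extra_failure(9, hours)))

    On those optimistic numbers the risk per rebuild looks small; the practical worry is that the survivors are the same age, from the same batch, and being hammered by the rebuild, so real-world odds are considerably worse.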

    1. YourNameHere
      Holmes

      Re: I can't wait to see...

      What is the rebuild time when a drive goes out? Is it actually 3 days? I know the 5TB drive I just put in at home took close to 8 hours to format. I could easily believe it's a multi-day rebuild if just one drive goes out in an array.

      1. M. B.

        Re: I can't wait to see...

        Really depends on the workload on the array at the same time. I had an older NetApp FAS3140 poop out a 2TB 7K drive and it took 26 hours to rebuild.

      2. TheVogon Silver badge

        Re: I can't wait to see...

        "What is the rebuild time when a drive goes out? It it actually 3 days?"

        How long is your piece of string?

        Depends on the RAID type, how many disks, controller interface and performance, loading, rebuild priority, etc. Put it this way: RAID 5 is probably not a good idea...
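
        As a naive lower bound (my assumptions, not a vendor or poster figure): a rebuild cannot finish faster than the time needed to stream the whole replacement drive, so a 12TB spindle at an optimistic sustained 180MB/s is already most of a day before rebuild priority and foreground load throttle it further:

          def min_rebuild_hours(capacity_tb, sustained_mb_s, rebuild_share=1.0):
              # Naive lower bound: hours to write capacity_tb onto the new drive at
              # sustained_mb_s, with rebuild_share of that bandwidth given to the rebuild.
              capacity_mb = capacity_tb * 1_000_000  # decimal TB, as drives are sold
              return capacity_mb / (sustained_mb_s * rebuild_share) / 3600

          for share in (1.0, 0.5, 0.25):  # idle array vs. increasingly busy array
              print("12TB at 180MB/s, {:.0%} of bandwidth for rebuild: ~{:.0f} h"
                    .format(share, min_rebuild_hours(12, 180, share)))

        Which is how a busy array ends up around the three-day mark without anything actually being broken.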

        1. Alan Brown Silver badge

          Re: I can't wait to see...

          "Put it this way, RAID 5 is probably not a good idea...;."

          Nor is RAID50 - just ask KCL

    2. Anonymous Coward
      Anonymous Coward

      Re: I can't wait to see...

      "...the look on my customers faces when I have to tell them what the expected rebuild time on an array of these will be. Probably measured in days or even in weeks instead of hours."

      RAID 10 solves this nicely. RAID 6 shouldn't be used in production IMHO, but it makes a great backup array type.

      1. Anonymous Coward
        Anonymous Coward

        Re: I can't wait to see...

        I doubt whether a bank of 12TB drives would really be suitable for RAID 5 / RAID 6. Any helium losses are surely fairly equal across all disks statistically, so after several years, once one drive fails (due to friction-related heat), there must be a good chance that a second failure takes out the parity rebuild, just from the extensive drive accesses/computation needed to recover from a single failed 12TB drive.

        To rebuild a RAID 5 parity array, the corresponding bits from every surviving drive in the array have to be read to recompute the parity. If you have 10x12TB disks in an array, that is 108TB of data that has to be read to recompute the one failed 12TB disk, with any single failed read in that 108TB fcuking things up.
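
        To put a number on that "any single failed read" risk, a sketch under assumptions of my own (the 1-in-10^15-bits unrecoverable read error rate that enterprise drive datasheets typically quote, with errors treated as independent):

          import math

          def p_ure_during_rebuild(bytes_read, ure_per_bit=1e-15):
              # P(at least one unrecoverable read error while reading bytes_read),
              # assuming independent errors at ure_per_bit errors per bit read.
              return 1.0 - math.exp(-bytes_read * 8 * ure_per_bit)

          surviving = 9 * 12e12  # nine surviving 12TB members of a 10-drive RAID 5 group
          print("P(>=1 URE while reading 108TB): {:.0%}".format(p_ure_during_rebuild(surviving)))

        Roughly a coin toss per rebuild on that spec, which is exactly why a second level of redundancy (RAID 6, RAID-Z2/Z3) exists: a single URE during a rebuild then isn't fatal.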

        1. TheVogon Silver badge

          Re: I can't wait to see...

          "I doubt even if a bank of 12TB drives would be really suitable for Raid 5 / Raid 6"

          RAID 6 is fine for most uses. The probability of data loss per 14:2 RAID 6 group of 12TB disks in an array is circa:

          1 year: 0.0000003028723112
          2.5 years: 0.0000007571806060
          5 years: 0.00000151436063876

    3. Alan Brown Silver badge

      Re: I can't wait to see...

      "..the look on my customers faces when I have to tell them what the expected rebuild time on an array of these will be. "

      RAIDZ3, preemptive replacements.

      But customers won't do that, so occasionally you do have to pull from tape.

  7. JJKing Bronze badge
    IT Angle

    Technology wonders abound.

    WOW, we have gone from a tiny 10TB HDD to a MASSIVE 12TB monster.

    One small step for technology, one giant leap for an ant.
