Hold on a sec. When did HDDs get SSD-style workload rate limits?

A recent Register article reminded me that hard drive vendors are introducing warranty limits on their products for the amount of data that can be written to them in any one year. SSDs have always had this restriction; does the same now apply to HDDs? Solid state drives (SSDs) have always had this restriction because we know …

  1. JeffyPoooh
    Pint

    SSD Caching perhaps?

    It would be very tempting to include some Flash Cache on any HDD.

    If these limits are new, then perhaps something like that is the reason.

    1. Nate Amsden

      Re: SSD Caching perhaps?

      Seagate has had those for a while both laptop and enterprise

      1. TechBearMike

        Re: SSD Caching perhaps?

        The Seagate consumer drives with "Flash" cache sucked, to put it mildly. I know because I had three of them, two in a RAID enclosure. They all failed within a week of one another, just before the end of the warranty period. The replacement drives were DOA or died within weeks of arrival. So three replacement drives for nothing. Thanks for the "replacement with fully tested, refurbished products," Seagate! I wasn't going back through the hassle of another replacement cycle just to get more defective, useless product.

        The original drives had become severely sluggish, probably due to the flash cache wearing out from excessive writes. I never saw the advertised significant decrease in boot times, faster application launches, etc. In concept this seems a good idea, but I wouldn't ever spend the money with Seagate again, not even for a regular HDD. Their product was crap and the "warranty replacement" process and products were complete failures. If you need more confirmation, look at the reviews on amazon.com (US). The product was trash.

        I'm saving up to replace all my HDDs with SSDs; no more "hybrid" drives, ever. It frightens me that Seagate's hybrid drives were ever targeted for enterprise customers. Zip drives were more reliable back in the day, and that's not saying much, obviously.

  2. Steve Davies 3 Silver badge

    sure large scale flash is just around the corner

    but how much will it cost?

    That seems to be the killer.

    Yes the price has been coming down but go above 1TB and you have to pay an awful lot for the device.

    I wonder if there is a market for large-capacity flash but without 'shit off a shovel' performance. I'd go for it.

    Slower and cheaper per gigabyte could make very large drives (should we continue to say drives?) affordable.

    1. another_vulture

      Re: sure large scale flash is just around the corner

      For HDD, you save a bunch of money if you accept slower I/O, because a slower motor is cheaper and allows cheaper I/O electronics, and a slower actuator is cheaper. Furthermore, increased parallelism is very expensive. Access electronics are very expensive because of the complicated analog circuitry needed for the magnetic signals.

      By contrast, you save almost no money if you accept slower I/O for flash. Access electronics are cheap and therefore any amount of parallelism is cheap. Throughput is limited only by the speed of the interface. Read latency does have a lower bound but is thousands of times better than HDD even for very slow flash. Flash cost is driven (to a first approximation) by the number of flash bits. Large-scale flash storage cost will closely track the cost of flash chips.

      1. Alan Brown Silver badge

        Re: sure large scale flash is just around the corner

        "Furthermore, increased parallelism is very expensive."

        There's no parallelism in an HDD. Only one head is ever active at a time.

        Parallelism was tried and abandoned as "too hard" a long time ago. It's possible that WD or Seagate might revive the technology, but as it wouldn't improve IOPS (only sequential throughput) there's not much point.

        1. Steven Jones

          Re: sure large scale flash is just around the corner

          There is parallelism, but not in a single device. Striping allows for higher throughput on sequential access. It also allows for more random I/Os in proportion to the number of devices, but in aggregate only. Many people have been there - thrown more disks at a workload in an attempt to get more throughput, often at the expense of leaving a lot of unused space (using short-stroking techniques to put all the data on the outer tracks of a disk, which minimises seek latency and maximises sequential throughput).

          All this costs money and has a big power cost too, but it is parallelism. You might conceivably solve aggregate throughput problems with parallelism, but it can never fix the latency issue.

  3. Mage Silver badge

    We need more reliable, not just larger.

    Half the capacity and four times more reliable is obviously better if space and power aren't an issue.

    Is the solution taller drives with no shingles and bigger tracks, i.e. more platters?

    1. Anonymous Coward
      Anonymous Coward

      Re: We need more reliable, not just larger.

      Agreed, it is not as if we are pushed for space in large storage arrays.

    2. Charles 9

      Re: We need more reliable, not just larger.

      No, there's a limit there, too, as more platters will strain the spindle and the motor. Older drives spun slower, reducing the forces but also lowering the performance. That's why Quantum's brief step back to 5.25" hard drives fell flat. Eventually, as noted in the article, rust is going to run into the immovable wall of physics AND be pinched by performance demands that prevent larger-but-slower solutions (I can speak from experience: mirroring 3TB worth of stuff over USB3 took the better part of a day, and transferring lots of data takes an unavoidable amount of time, which opens the door for reconstruction failures).

      1. jason 7

        Re: We need more reliable, not just larger.

        That's why I never strayed much past 1TB single-platter HDDs. I see these 5+ platter beasts and I just cringe.

    3. cortland

      Re: We need more reliable, not just larger.

      OK!

      http://hackadaycom.files.wordpress.com/2012/12/worlds-most-expensive-hard-drive-teardown.png?w=800&h=450

      1. Charles 9

        Re: We need more reliable, not just larger.

        Found the article that is the source of that image.

        That's an IBM 3390. It cost $250,000 - in 1989 dollars. Yup, the thing is a quarter century old and held up to around 22GB, which doesn't seem much until you realize that, at the time, 200MB hard drives were just coming onto the PC market and were no small change, either. So it kind of solidifies my point, as it's very old and very expensive.

    4. Alan Brown Silver badge

      Re: We need more reliable, not just larger.

      Taller drives = higher spec motors and more platter flutter. This won't happen and larger platters won't happen for similar reasons.

      Fly heights are 1/10 of what they were when drives broke the 100GB barrier. There simply aren't the engineering tolerances available to allow taller drives/larger platters, even if drives were slowed to 3700 or 4200rpm again.

    5. Daniel von Asmuth
      Devil

      SATAN disc

      The joke is on you: these drives were designed for Windows '95: enough capacity for Joe's porn videos, just fast enough to play video and the reliability of a politician. Enterprise disc drives don't come with rate limits, are fast and the failure rate is ten times lower.

      http://www.seagate.com/www-content/product-content/enterprise-performance-savvio-fam/enterprise-performance-15k-hdd/ent-perf-15k-5/en-us/docs/100748152b.pdf

      Come to think of it: if discs are made with SSD-like lifetime limitations, that must mean the manufacturer put an SSD into the box as cache.

  4. Francis Boyle Silver badge

    Surely the market for this sort of device is pretty much write-once, read-never applications. I can't see the typical user, whether it's FB with its cat pictures or me with my, er, reference materials, hitting the limit.

  5. Adam 1

    > However, there’s no direct information in the spec sheets to say drives are warrantied for data written. In fact, terms such as “designed for” are used more often, so where do we stand with the warranty?

    In Australia, it's actually pretty simple.

    https://www.accc.gov.au/consumers/consumer-rights-guarantees/consumer-guarantees

    Companies can include or exclude whatever they want; it makes no difference to your rights under consumer law. Unless that writes/year figure is clearly stipulated on the box, visible before you make the purchase, they can't enforce it (won't stop them trying, of course). They don't even provide an easy way to measure how much has been written, so it would be difficult, to say the least, for them to enforce even if they suspected you were "naughty".

    1. Peter Gathercole Silver badge

      @Adam 1

      The S.M.A.R.T. data maintained by the drive actually does contain counters of all sorts, which I believe include the total amount written, so that could be used to try to enforce this type of limit.

    2. Alan Brown Silver badge

      "They don't even provide an easy way to measure how much has been written"

      It's been visible in SMART results on spinners for a couple of years - it started with SSDs but has moved to spinners too (and includes read stats as well).
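
      For the curious, here's a minimal sketch of pulling that figure out of SMART with smartmontools and Python. The attribute name, ID and units vary by vendor (and some drives count in blocks larger than one sector), so treat the result as an estimate only:

          # Rough estimate of total host writes from SMART via smartctl.
          # Assumes the drive exposes a "Total_LBAs_Written" attribute and
          # counts in 512-byte sectors -- both vendor-specific assumptions.
          import subprocess

          def total_tb_written(device="/dev/sda", sector_bytes=512):
              out = subprocess.run(["smartctl", "-A", device],
                                   capture_output=True, text=True, check=True).stdout
              for line in out.splitlines():
                  if "Total_LBAs_Written" in line:
                      raw = int(line.split()[-1])        # last column is the raw value
                      return raw * sector_bytes / 1e12   # LBAs -> terabytes
              return None                                # drive doesn't report it

          if __name__ == "__main__":
              tb = total_tb_written()
              print(f"approx. {tb:.2f} TB written" if tb is not None else "attribute not reported")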

  6. Jay 2

    You can never have too much disk space (or too much memory)

    Whilst it's all very nice being able to squish TBs of data into the usual 3.5/2.5" packages, I'm always a bit squeamish about getting the latest/greatest/biggest hard drives. Though on whatever article it was the other day, I wasn't expecting write limits (or whatever) on a hard disk (as opposed to an SSD).

    Not really used SSDs myself in a big way. At work we've got a few as 'expendable' scratch-area media. At home my iMac has one of Apple's Fusion drives in it, which seems to be fast enough. I'm not sure about laptops with only an SSD in; I guess as long as they're easily user-replaceable then there's no reason why not.

    1. John Robson Silver badge

      Re: You can never have too much disk space (or too much memory)

      SSDs are the single most significant upgrade you can make to most machines.

      A Fusion drive is just a diddy SSD with a larger HDD behind it.

    2. jbuk1

      Re: You can never have too much disk space (or too much memory)

      A drive with no moving parts is by far preferable to the alternative in a laptop.

      Really not sure why you think SSD's in laptops are a bad idea?

      1. asdf

        Re: You can never have too much disk space (or too much memory)

        >Really not sure why you think SSD's in laptops are a bad idea?

        Probably due to yet another employer being penny wise but pound foolish. Take the taste test. Buy yourself a fairly cheap $70 SSD (you can get 240GB for that online) and use it as a boot drive for a week on any Windows 7+ machine (or Mac, or even, shudder, systemd Linux) and you will never want to boot off spinning rust again. It makes hibernation unnecessary.

        1. Version 1.0 Silver badge

          Re: Really not sure why you think SSD's in laptops are a bad idea?

          Because when SSDs go tits up they sink to the bottom of the pool; HDDs tend to float on the surface. SSDs are wonderful most of the time, but if you have an "event" then the chances of getting up and running without a complete re-image are not good.

          On the plus side - it only takes one "event" to persuade management that backups are a really good idea, so there's a silver lining to this - SSDs mean much faster performance and eventually lead to better backups (on HDDs).

          1. Steven Jones

            Re: Really not sure why you think SSD's in laptops are a bad idea?

            I've installed 6 SSDs (the oldest is a 256GB Crucial bought almost 5 years ago - and very expensive it was). All are still in use (and some in their second machine). In contrast, I've had perhaps a dozen HDDs and I've had three sudden failures. Note that not all these failures were complete, but the disks became essentially unusable due to unrecoverable errors. Maybe a specialist could get the data back with the right equipment, but I couldn't.

            My experience (admittedly not a statistically large sample) is that SSDs have been very reliable and that HDDs can, and do, fail suddenly. In any event, the first rule of IT is make sure you can recover everything important. Don't ever rely on a single device.

        2. PaulFrederick

          Re: You can never have too much disk space (or too much memory)

          what is this boot thing that you speak of?

    3. TechBearMike

      Re: You can never have too much disk space (or too much memory)

      MacBook Pro, 15", 2015 model with SSD: blows the previous model I had out of the water, speed-wise. The SSD is warp speed, all the time. I don't want a spinning rust drive in my primary laptop again, ever. The only problem was that the maximum size offered by Apple was 500GB, which barely contains my music and photo libraries. Now that I've learned the SSD is upgradeable by the consumer, I'm upgrading to a 1TB SSD, the capacity I had in my previous MBP.

  7. Lusty

    "so where do we stand with the warranty?

    I need to check some of the detailed product sheets. "

    I have the same question, and I agree you should have checked and included the information!

    Regarding rebuild times, this is essentially irrelevant. The problem with large drives is with recovery of the data, not with the time taken to do so. With the expected unrecoverable error rates of current drives you probably couldn't even read a whole 16TB disk successfully in one go, so mirroring is useless, RAID 5 is useless, and RAID 6 is probably useless. Erasure coding might help.

    Reliability could be improved through internal data resilience at the cost of some drive space. Many "drive failures" are not actually drive failures at all, so internal resilience could be used to recover from some of the UREs.
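
    To put numbers on the full-read claim, here's a crude sketch. The one-error-per-1e14-bits figure is the commonly quoted consumer spec (it also comes up in a comment below), and treating every bit as an independent trial is pessimistic, but it shows the shape of the problem:

        # Expected unrecoverable read errors (UREs) in one full read of a drive,
        # assuming the quoted consumer spec of 1 URE per 1e14 bits read.
        def expected_ures(capacity_tb, bits_per_ure=1e14):
            return capacity_tb * 1e12 * 8 / bits_per_ure       # decimal TB -> bits

        def p_clean_full_read(capacity_tb, bits_per_ure=1e14):
            # Crude independence model: every bit is a separate Bernoulli trial.
            return (1 - 1 / bits_per_ure) ** (capacity_tb * 1e12 * 8)

        for tb in (4, 8, 16):
            print(f"{tb:>2} TB: ~{expected_ures(tb):.2f} expected UREs, "
                  f"P(clean full read) ~ {p_clean_full_read(tb):.0%}")
        # At 16 TB that's ~1.3 expected errors and only a ~28% chance of a
        # clean end-to-end read under this model -- hence the point about
        # mirror and RAID 5 rebuilds of very large drives.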

    1. Alan Brown Silver badge

      "RAID 5 is useless, and RAID 6 is probably useless. Erasure coding might help."

      RAID 6 is almost useless - we've lost RAID 6 arrays during rebuild.

      That's one of the reasons I moved to RAIDZ3 a few years ago, the other being that ZFS is very good at detecting errors (far better than the 1-in-10^14 rate that HDDs are rated for) and at flagging drives which are playing up long before it shows in SMART stats.

      1. YetAnotherLocksmith Silver badge

        Because SMART isn't. YMMV but I've seen discs that don't work, yet SMART says all is well, & I've seen disks with dodgy SMART results that have worked for ages after.

        1. Mikey

          S.M.A.R.T.

          Sometimes May Actually Report Truthfully....

          Some Marketing Arse Requested This...

          Selective Metric Analysis Reduces Tolerance...

          System Makes All Reliability Transient

  8. batfastad
    Headmaster

    some some

    > Unless we some some magical breakthrough

    Yes, I have nothing better to do.

  9. Peter Gathercole Silver badge

    All of this also ignores...

    ... the read-writes that go on under the covers, performed by most RAID controllers to prevent bitrot. It could very well be that there is further 'amplification' (which, it should be noted, will also happen for as long as the RAIDset is powered, even if it is not being actively written to).

    This probably makes it even more important to not buy all of the disks in a RAIDset at the same time or from the same batch of disks.

    1. John Greenwood

      Re: All of this also ignores...

      Hehe,

      The argument for selecting drives from the same batch for a RAID is another big question.

      a) The "probability" of having a dodgy production disk increases when you have disks from different batches.

      b) The "probability" of multiple concurrent disk failures increases with disks from the same batch.

      More single drive failures or risk of a total data loss? :-) I've always gone for a single batch, but with that RAID backed up to a totally different device (hardware, software and drives).

      1. YetAnotherLocksmith Silver badge

        Re: All of this also ignores...

        I use different manufacturers now. Once had a RAID that died, & the second (paired) disk died literally two hours later during the restore! Cue data recovery required.

      2. Peter Gathercole Silver badge

        Re: All of this also ignores... @John

        Ah, but more frequent single failures in a RAID set are an annoyance, not something that puts your data at risk (as long as you replace the failed disks).

        Multiple concurrent failures risk your data!

        I will opt every time for a scenario where I have to replace single drives more frequently, as opposed to one with less frequent work, but increased risk of data loss.

  10. Dr Spork
    Pirate

    The limits seem to be arbitrary absolute data values and not some product of the capacity of any particular model... so it seems to me that the spinning rust pushers are trying to indemnify their "warranties" against covering normal wear to mechanical head/arm components and the like, or some newfangled flash buffer as JP suggests, rather than the plates of rust themselves.

    Strikes me as pretty slimy.

    1. Anonymous Coward
      Anonymous Coward

      Warranties never cover normal wear and tear, they just cover manufacturing defects. If they're now adding expected read/write lifespans to their drives what you should be asking is how these values compare to previous models. Then you should buy the drive whose warranty covers the lifespan you need ;)

  11. Duncan Macdonald

    Why not bigger drives

    For the large capacity lower performance market (array sizes in multiple petabytes) why do drives have to be limited to 3.5 or 2.5 inch sizes? Using the larger 5 1/4 inch size would allow far more data to be stored per drive.

    1. Lusty

      Re: Why not bigger drives

      Mechanics. Large platters wobble more so you can't have the tight tolerances required to get data density. The tradeoff is worth it, hence we've not had physically big disks for a long time. They could make them taller, but there would be no point as you can double capacity with two drives that way anyway.

      Clever people have increased density by adding drives through to the back of the chassis, and larger drives wouldn't help there much if at all.

      If you think it through, and read my comment above, we've actually hit a point where larger drives are not desirable. I'd rather see smaller form factors with greater density but lower capacities. This would allow sensible data protection schemes and less disruptive drive replacement which are very nice in the petabyte scale!

    2. Voland's right hand Silver badge

      Re: Why not bigger drives

      We were there:

      https://en.wikipedia.org/wiki/Quantum_Bigfoot

      The industry decided not to repeat the experiment.

    3. dajames

      Re: Why not bigger drives

      ... why do drives have to be limited to 3.5 or 2.5 inch sizes? Using the larger 5 1/4 inch size would allow far more data to be stored ...

      The larger the platter the more energy it takes to spin it up (and down) and the greater the gyroscopic force on the spindle (causing wear) if the disk is moved while spinning. Large platters also need longer, and so stronger, and so heavier, arms for the read/write heads, so these have more inertia, which increases the energy needed to move them and increases the track-to-track access and head-settling times.

      Also, the more energy you need, the more cooling you need ...

      There's a reason drive sizes have been getting progressively smaller and smaller.

    4. Steven Jones

      Re: Why not bigger drives

      I recall 14" drives which spun at 3,600 RPM. Now just try imagining what the latency and seek time figures looked like. Then try engineering such a beast with current data densities and imagine how many 10s of TB there would be to access with such slow access speeds.

      Don't think you can spin this thing any faster. Try it and you'll find the forces are such that the aluminium platter will stretch at the outside and ripple. That's before even trying to speed it up to the 15K RPM you see on enterprise drives. By that point the peripheral speed is approaching the speed of sound (I suppose a helium enclosure might help).

      nb 14" drives running at even 3,600 RPM could be dangerous - I heard of one in a data centre where the bearing seized, the shaft sheared and it wrecked several cabinets. There's a lot of kinetic energy in a 14" platter spinning at 3,600 RPM.
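
      A rough sketch of those numbers, with an assumed (not sourced) mass and size for a 14-inch aluminium platter pack:

          import math

          R = 0.178            # 14-inch platter radius in metres
          MASS = 4.5           # assumed mass of a multi-platter pack, kg
          MACH_1 = 343         # speed of sound at room temperature, m/s

          def rim_speed(rpm):
              return 2 * math.pi * rpm / 60 * R

          def kinetic_energy(rpm):
              omega = 2 * math.pi * rpm / 60
              inertia = 0.5 * MASS * R ** 2      # solid-disc approximation
              return 0.5 * inertia * omega ** 2

          for rpm in (3600, 15000):
              v = rim_speed(rpm)
              print(f"{rpm:>5} rpm: rim speed {v:5.0f} m/s ({v / MACH_1:.0%} of Mach 1), "
                    f"stored energy ~{kinetic_energy(rpm) / 1000:.1f} kJ")
          # Even at 3,600 rpm the pack stores several kJ; at 15,000 rpm the rim
          # really would be approaching the speed of sound.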

      There's a reason why drives got smaller and smaller...

      1. Alan Brown Silver badge

        Re: Why not bigger drives

        "That's let alone trying to speed it up to the 15K RPM you see on the enterprise drives."

        It's worth noting that virtually all 15krpm or faster drives use 2.5" or smaller platters - made of glass.

        WRT kinetic energy: Not HDDs but a couple of other examples of spinning things running amok:

        A centrifuge in a university biology lab had bearing failure at 10,000rpm. The 10kg rotor exited the building via a concrete block wall (5th floor) after blazing a trail of destruction across the lab. Thankfully it landed in a lawn and dug itself a large hole before it could travel any further.

        In the 1970s a 2MW hydro turbine lost lubrication and exited the generator room after levering itself out of the concrete floor. It was found several miles downstream. Had it gone in the other direction there's a good chance it would have destroyed the dam.

        1. 404

          Re: Why not bigger drives

          This one time, at band camp, I had a cd come spinning out of the cd tray at about Mach 15, and hit me right in the throat. It hurt.

          Ok, it wasn't band camp. Musical instruments are expensive. But it did hurt, even cut me a little.

          ;)

  12. Anonymous Coward
    Anonymous Coward

    We use

    Seagate 2TB drives here, in the RAID array for the CCTV system. These drives have a 3-year warranty. In the last 6 months I have returned 4 to be replaced as they have all failed with the same error.

    Different batches, but same write failure error.

    1. Lusty

      Re: We use

      If you update firmware regularly you might find that error is actually "disk from a known bad batch". Vendors quite often pre-fail disks to pre-empt data loss if they know a batch is dodgy.

    2. Anonymous Coward
      Anonymous Coward

      Re: We use

      We have a stack of 6 3TB Seagate surveillance HDDs spare, warranty replacements from 6 that failed in quick succession in a Ceph cluster.

      The actual drives in the hosts were replaced by consumer Hitachis prior to the duds being sent back to Seagate. We haven't been game to try the replacement disks in any machines yet. So far, the Hitachis have out-lived them.

      1. Anonymous Coward
        Anonymous Coward

        Re: We use

        You never want to use drives from the same batch in a RAID set, because of the massive difference in per-lot failure rates. With different-lot drives versus identical drives, you are less likely to have a second failure after the first. You can't guarantee different lots, but you can increase your chances by buying from different places instead of ordering them all from one spot.

        I consulted for a company once that had a lot of small RAID arrays (JBODs attached to servers running software RAID) that generally ordered drives in batches to build multiple servers. They'd sort through them to match up dissimilar lot numbers (and ideally factory locations) and send them back unopened for replacement from their VAR if they had too many from the same lot for their mix-n-match strategy. I was a bit skeptical of all this at first, but they had a rather anal guy who developed this policy who kept track of drive failures in this way since he started working there, and sent me a big spreadsheet to prove his point :)

        1. Christopher E. Stith

          Re: We use

          If you're doing software RAID on top of JBOD you don't need to stick to the same manufacturer. Buying some Toshiba, some Hitachi, and some Seagate makes getting the same lot much less likely.

    3. Alan Brown Silver badge

      Re: We use

      "Seagate 2TB drives here, in the raid array for the CCTV system"

      Barracuda DL/DMs or Constellations, by any chance?

      1. YetAnotherLocksmith Silver badge

        Re: We use

        Just avoid 3TB drives - they appear to have far higher failure rates than 2 or 4TB disks.

        (This was tested across loads of disks, there's an article on here somewhere about it I think)

  13. AMBxx Silver badge
    Boffin

    Help me out here

    How important are these figures in the real world? Are companies hitting this sort of data volume in real life? To hit the numbers, are you talking about constantly writing data night and day?

    1. Steven Jones

      Re: Help me out here

      Absolutely they did. I recall from several years ago that the drives in enterprise arrays were constantly being driven. We followed a policy of "stripe everywhere" and spread many, many workloads over the same enterprise arrays. Different workloads were busy at different times, and in a modern very large enterprise there simply aren't quiet periods. If it wasn't running OLTP for the call centres or as the backend for the online presence, it was running through vast amounts of overnight work: extracting information for data warehousing, accepting orders, configuration updates, masses of external diagnostic information. That's without making backups, taking snapshots and so on.

      The only way to get efficient use of (very expensive) enterprise storage and get the maximum throughput was to spread the workload as widely as possible and use statistics to balance the load. Even then, back end HDDs would easily be hitting 50%+ utilisation for much of the 24 hour day and much higher at other times.

      I/O latency was always an issue with HDDs, even with massive amounts of array and database caching. It was an issue only resolved when SSDs came along.

      nb. I must emphasise that these were enterprise disks (typically relatively reduced capacity, running at 10k or 15k rpm), and they did fail - but very large enterprises depend on maintenance contracts and hot-swapping of failing drives, so warranties are completely immaterial. Enterprise arrays are defined by availability - basically 100%, and anything else causes a meltdown of relationships with suppliers.

  14. Michael Sanders

    Great research. That was a very informative article about the state of affairs. My two cents is that flash is taking over, and it's pretty obvious, if it continues to get bigger and cheaper at the rate it has been - and I don't see any reason why it shouldn't. It's also reasonable to expect that the technology that's used in SSDs will become less delicate and more permanent.

  15. Computurd

    These specifications are nothing new - they have been on HDDs for years. It isn't due to shingling; only one of the models even has shingles (the Seagate 8TB), the rest do not.

    A comparison of DWPD (Drive Writes Per Day) is frivolous with an 8TB volume.
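
    For a sense of why DWPD reads oddly at these capacities, here's a quick conversion. The annual workload figures below are purely illustrative, not quoted specs:

        # Drive writes per day implied by an annual workload rating (TB/year).
        def dwpd(annual_workload_tb, capacity_tb, days=365):
            return annual_workload_tb / days / capacity_tb

        for capacity_tb in (1, 4, 8, 16):
            for workload_tb in (180, 550):
                print(f"{capacity_tb:>2} TB drive, {workload_tb} TB/yr: "
                      f"{dwpd(workload_tb, capacity_tb):.2f} DWPD")
        # e.g. 550 TB/yr on an 8 TB drive works out to ~0.19 DWPD -- a number
        # that looks meaningless next to SSD endurance figures.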

  16. Frumious Bandersnatch

    could be just

    some sort of "retconning" (retroactive continuity) or whatever that word is* for when some new tech becomes the new normal and we begin to look at the old tech through the lens of the new. Unlike something like "horse-power", where we do the opposite.

    I always thought that the number of power cycles was the main reason spinning disks failed, though. Can rust wear out? Or does it, as Neil Young would have it, never sleep?

    * the word I was looking for was probably "back-formation", it seems

  17. Nigel 11

    DWPD - wrong statistic?

    Drive Writes per day makes sense for an SSD because the storage medium suffers wear and tear from being written to.

    Writing to a magnetic surface should have no effect on it. What does wear out a hard disk is seeking. So if they are trying to differentiate desktop drives from enterprise ones, they should quote a maximum number of head movements per day. (This broadly equates to IOs per day, read or write being irrelevant).

    Unless the head technology has now become so ultra-miniaturized that the total time spent writing is now the life-determining parameter, rather than the amount of mechanical head-seeking. But this will be a function of data density, and so there won't be much difference between desktop and enterprise drives (absent some super-good and -expensive head technology that isn't widely known?)
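
    If drives were rated that way, the arithmetic is simple enough. The sustained IOPS figure below is an assumption for illustration, not a spec:

        # Head movements per day, treating every random I/O as one seek and
        # assuming a 7,200 rpm drive can sustain ~150 random IOPS flat out.
        SUSTAINED_IOPS = 150
        seeks_per_day = SUSTAINED_IOPS * 24 * 3600
        print(f"~{seeks_per_day / 1e6:.1f} million seeks/day at {SUSTAINED_IOPS} IOPS")
        # ~13 million seeks a day if hammered around the clock -- the kind of
        # number that would separate an enterprise duty cycle from a desktop one.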

  18. Steven Jones

    This is not news. There has always been a difference between enterprise disks and standard ones. I recall about 7 or 8 years ago dealing with an HP array where there were two different HDD options, with the higher-capacity one having a much reduced duty cycle, especially insofar as random access was concerned (unfortunately the specification was less than useful in defining what it meant by random rather than sequential access, or how it was to be measured).

    In the case of the top-level enterprise HDDs, we'd hammer them and they were hot-replaced as they failed (or were showing signs of failure) under the maintenance contracts. The primary issue was if performance was compromised during any RAID rebuild processes.

  19. martinusher Silver badge

    How are the drives being used?

    Back 'in the old days' when processing and storage costs were higher there was a well-defined hierarchy of information storage, varying from high-speed local storage to low-cost bulk storage, that made optimum use of resources. Since we've been in a situation where both processing and storage costs are unrealistically low, people don't think the storage hierarchy matters -- you just max everything out, it's cheap -- but then you run into other costs related to finding and archiving material. Part of that cost is that large disks have finite seek times and usage lives.

    Just as unrestricted processing has resulted in code bloat that negates the advantages of high-performance processors, unrestricted storage has resulted in abuse where we've turned our information storage areas into the logical equivalent of a hoarder's world. Instead of asking ourselves why we've got all this stuff we're just demanding bigger houses to put it in.

    1. Charles 9

      Re: How are the drives being used?

      "Instead of asking ourselves why we've got all this stuff we're just demanding bigger houses to put it in."

      Why do we have all this stuff? Because just when you need it the most, the original source up and vanishes without a trace. Many of us have run afoul of this firsthand, so the mindset is "better safe than sorry" and "get it before it's gone." You can always get more storage (if it isn't stacks of hard drives now, it's books of CDs or boxes of floppy discs then). It's a lot harder to resurrect a site that doesn't exist anymore.

  20. Anonymous Coward
    Anonymous Coward

    S.M.A.R.T. provides this data, you had to know they'd use it

    While they could theoretically refuse warranty service by looking at the data volumes, I highly doubt they are doing that. At least, not yet...drive makers are hurting though so getting more stringent on warranty replacement is probably a strategy they are considering. Anyway, the recommended DWPD figures might help dissuade use of a consumer drive in a corporate server where previously if you didn't need 15K level of performance you could save a bit of money buying the cheaper variety if you figured RAID would protect you from data loss.

    The high performance 'enterprise' drive is already pretty much dead, and the general purpose 'consumer' drive has only a year or two left. Remember, they don't need to equip PCs with 4 TB drives; once a 1 TB SSD reaches price parity with a 1 TB HDD, which I'd guess happens around 2020, all PCs will ship with an SSD and you will add an HDD at extra cost if you need the additional storage.

    The next to fall will be the video drive, in a few years when SSDs come close enough to price parity with HDDs. Endurance-wise, my back-of-the-envelope calculations indicate that using a 1 TB Samsung drive in a DVR recording six HD tuners 24x7 would last about two years before it hits its rated write limit. So endurance needs to be improved, but not THAT much. Double that endurance and you should be fine - after all, Storage Review's tests indicate that many SSDs last several times longer than their rated life, and large block writes as in a DVR are the optimal case. A four-year rated life that is going to be 6-12 years in reality should be fine, especially since S.M.A.R.T. is able to tell you when it is running out of relocatable blocks.
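
    For anyone who wants to redo that envelope, here's a sketch with an assumed stream bitrate and endurance rating (both illustrative, not quoted specs) - nudge either and you land somewhere around the one-to-two-year mark:

        # DVR endurance estimate: six HD tuners recording around the clock.
        HD_STREAM_MBPS = 5      # assumed average bitrate per tuner, Mbit/s
        TUNERS = 6
        RATED_TBW = 150         # assumed rated endurance of a 1 TB consumer SSD

        tb_per_day = TUNERS * HD_STREAM_MBPS * 1e6 / 8 * 86400 / 1e12
        years_to_limit = RATED_TBW / tb_per_day / 365
        print(f"~{tb_per_day:.2f} TB written/day, rated limit in ~{years_to_limit:.1f} years")
        # ~0.32 TB/day, so a bit over a year at this rating -- and, as noted,
        # real-world endurance tends to run well past the rated figure.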

    Once video drives fall, only "capacity" drives that use SMR will remain. They will be used for 'cold' storage that has few writes so neither the lifetime nor crappy write latency will matter.

    1. Alan Brown Silver badge

      Re: S.M.A.R.T. provides this data, you had to know they'd use it

      "drive makers are hurting though so getting more stringent on warranty replacement is probably a strategy they are considering"

      They've already reduced 5/3 year warranties to 1/3 year ones and pushed prices up. Customers have noticed.

      If they start rejecting warranty returns then they'll turn the steady rate of SSD replacement into a stampede. The warranty on 2 and 4TB SSDs makes HDDs look ill, and the reduced power plus the performance boost means that "high speed" arrays are superfluous (i.e. the extra cost of the drives gets more than made up for by not having to buy another array to handle the high-IOPS stuff).

    2. Alan Brown Silver badge

      Re: S.M.A.R.T. provides this data, you had to know they'd use it

      "Once video drives fall, only "capacity" drives that use SMR will remain. They will be used for 'cold' storage that has few writes so neither the lifetime nor crappy write latency will matter."

      Once video drives fall, SMR might be the only HDD type remaining but reduced volumes will probably make them more expensive than LTO8

    3. Anonymous Coward
      Anonymous Coward

      Re: S.M.A.R.T. provides this data, you had to know they'd use it

      > "Remember, they don't need to equip PCs with 4 TB drives"

      Whilst it's a niche thing, some of us need bulk local storage. Modern high-quality digital stills cameras are capable of generating 70MB image files at a frame rate of 5fps (sometimes this is needed - moving subject, etc.), which chews through space like no tomorrow (it's entirely possible to burn through 10-20GB in a day's shooting). Whilst their sequential data rate is fine, HDDs with lower rotational rates tend to have noticeably higher latency, which can be a pain if you're batch-processing 200+ images. Sometimes a NAS isn't appropriate (e.g. space issues), and nor is cloud storage (e.g. limited upstream rate).

      I don't want to think what 4K video is like on storage ...

      1. Charles 9

        Re: S.M.A.R.T. provides this data, you had to know they'd use it

        You should see a modern digital filming session, which not only uses at least 6K resolution with a high color gamut but also sometimes films above 48fps, AND has to minimize compression because of the need to avoid generation artifacts during digital editing (if they're allowed to use any compression at all). So you have the need for high throughput, high capacity, AND high churn all at the same time. From what I hear, the need for high everything basically restricts them to very expensive, very specialized equipment, and forget about transmitting this stuff over even dedicated fiber. Most of the time, a courier with a hard drive is both cheaper and faster for transport during the production stage.

  21. YARR
    Boffin

    Re-inventing the disk drive --> Magnetic scrolls

    If magnetic disks are heading towards archival storage, maybe it's time to re-invent the disk drive by merging it with magnetic tape?

    Magnetic tape heads can read multiple data tracks in parallel, giving a higher data rate at a slower speed. Combining the parallel-head technology of tape with the moving-head technology of disk drives could achieve higher capacity than disks with a lower seek time than tape. The magnetic medium would be much wider than tape - like a magnetic parchment passing between two scrolls. However, the scroll length would be short enough to keep the seek time to only a few seconds. The scrolls could be wound onto a single roll, permitting them to be removable like camera film. What could possibly go wrong?

    1. Anonymous Coward
      Anonymous Coward

      Re: Re-inventing the disk drive --> Magnetic scrolls

      Tape switched from the linear scanning of an audio cassette player to helical scanning years ago.

      1. Alan Brown Silver badge

        Re: Re-inventing the disk drive --> Magnetic scrolls

        "Tape switched from the linear scanning of an audio cassette player to helical scanning years ago."

        None of the capacity tape mechanisms use helical scanning. The last to do so (SAIT) went out of production several years ago.

        LTO is linear multipass, serpentine format.

  22. razorfishsl

    It should be based more on head seek than data written.

    Full surface seeks are far more stressful than track-to-track skipping.

    It is more that the filesystems need to be looked at, so that drive seeks operate within distinct regions related to each other.

    There should also be a facility for the drives to read and re-write data once every few months, as a background task, taking care of drive and magnetic tolerances slipping.

  23. Wayne Sheddan

    Device reliability - overemphasized anyway?

    Storage - fast, cheap, reliable, size: choose any 3.

    At FAST '16 Eric Brewer (VP Infrastructure, Google) indirectly and correctly reiterated that reliability is a SYSTEM level capability - NOT a DEVICE level capability - and most bulk storage now uses distributed replicas or erasure coding anyway. Hence Eric opined that the go-forward requirement for an HDD device is fast and cheap capacity, letting the system deal with reliability.

    Most things like filers or object stores border on being WORM anyway... the working set is perpetually small and constantly shrinking as a portion of the total size. HDD is becoming largely a home for these vast swathes of stagnant nearline data that people should instead be consigning to /dev/null but can't bring themselves to do so. Worrying about DWPD makes no sense in this space.

    And let's face it - if HDD can't outstrip solid state in terms of capacity, it's just a matter of time. Dropping device reliability will be an issue of survival.

  24. YetAnotherLocksmith Silver badge

    Surely an array of redundant flash chips?

    Surely someone could make a "HDD" that simply takes SD cards in an array, & handles the wear leveling at a higher level (as well as the wear leveling built into the individual cards).

    You plug in a few ?Gb, ??Gb or ???Gb (micro?)SD cards, and the controller, in the form of a regular HDD-sized thing, gets on with it. Uses JBOD architecture or some fancy RAID, according to your tastes, & presents as a standard SSD/HDD.

    Completely removes a single point of failure too: if the controller dies, just put the SD cards in another controller. If any one card dies, you get an alert and you swap that card.

    This already exists, doesn't it? (It's too simple and obvious)

    1. Charles 9

      Re: Surely an array of redundant flash chips?

      You forget that removable media tends to get the LOWEST quality of flash chips. They get the leftovers, which means they're the lowest in reliability and only intended for occasional use. The odds are greater that you'll get multiple concurrent card failures (so you're more likely to lose data). Furthermore, the point about controller failure is that only the controller knows how the chips are arranged, so when it goes, it's a lot like losing your big password: no one else knows how to reconstruct the data. That's why you can't just swap a controller chip or the like when an SSD fails.

  25. ExoColonist

    One possible direction

    Approximately a year ago, we (the company I work for, to be exact) were negotiating/designing a new class of storage cluster for use in our primary product, large scale private/business Web hosting.

    About halfway through the negotiations one of the storage vendors broke the mold and changed their design from a traditional 3-tier design (lots of 7K spindles, some 10K spindles and a couple of handfuls of SSDs) to a 2-tier, 2:3 ratio SSD/7K construction. This happened to be the best mix of price and performance and ended up being the offer we went with, and I later learned that that change of design was, in all likelihood, motivated by Samsung's introduction of their 3.84TB SSDs.

    This move is, to me, a clear indication of the way we are headed. I think SSDs will continue gobbling up the entire performance-focused drive market; 7K disks will take a while to die, but it looks inevitable that that segment will shrink as SSD price/TB slowly nears that of spinning rust.

  26. SeanC4S

    http://uk.emc.com/about/news/press/2016/20160229-03.htm

    https://youtu.be/1xurec_UO60

  27. wsm

    Fast, big, cheap

    When it's a business that's buying, it's got its priorities in order: fast, big, cheap. When consumers buy, or when they buy from manufacturers/assemblers that want to please them, it's the other way around: cheap, big, fast.

    When SSDs can satisfy both sets of buyers, HDDs with all of their disadvantages will be a thing of the past.

    1. Anonymous Coward
      Anonymous Coward

      Re: Fast, big, cheap

      And that may take a while given, from what I hear, the capital costs for rust are still at least an order of magnitude cheaper than for flash, even 3D flash. Plus it's still easier to ramp up an existing rust plant to handle larger capacities than it is for Flash, which also faces a supply issue last I read. In other words, it's a lot tougher for a flash plant to reach break-even amortization in the current environment whereas most rust plants passed it a while back.

  28. Anonymous Coward
    Anonymous Coward

    SAS

    This is why I made the jump to SAS. For HDDs, the price increase isn't that much, and the MTBF and the unrecoverable sector rates are much better.

    For SSDs though, SAS is sadly $$$

  29. prof_peter

    RTFWP (Read The Fine White Paper)

    George Tyndall, “Why Specify Workload?,” WD Technical Report 2579-772003-A00

    http://www.wdc.com/wdproducts/library/other/2579-772003.pdf

    Basically the heads are so close to the platter that they touch occasionally. To keep the drives from dying too quickly, the heads are retracted a tiny bit when not reading or writing. From the paper:

    "As mentioned above, much of the areal density gain in HDDs has been achieved by reducing the mean head-disk clearance. To maintain a comparable HDI [head-disk interface] failure rate, this has required that the standard deviation in clearance drop proportionately. Given that today’s mean clearances are on the order of 1 – 2nm, it follows that the standard deviation in clearance must be on the order of 0.3nm – 0.6nm. To put this into perspective, the Van der Waals diameter of a carbon atom is 0.34nm. Controlling the clearance to atomic dimensions is a major technological challenge that will require continued improvements in the design of the head, media and drive features to achieve the necessary reliability.

    In order to improve the robustness of the HDI, all HDD manufacturers have in the recent past implemented a technology that lessens the stress at this interface. Without delving into the details of this technology, the basic concept is to limit the time in which the magnetic elements of the head are in close proximity to the disk. In previous products, the head-disk clearance was held constant during the entire HDD power-on time. With this new technology, however, the head operates at a clearance of >10nm during seek and idle, and is only reduced to the requisite clearance of 1 – 2nm during the reading or writing of data. Since the number of head-disk interactions becomes vanishingly small at a spacing of 10nm, the probability of experiencing an HDI-related failure will be proportional to the time spent reading or writing at the lower clearance level. The fact that all power-on-time should not be treated equivalently has not been previously discussed in the context of HDD reliability modeling."

  30. RaidOne

    Maybe not relevant...

    But I stopped buying HDDs 5 years ago, and started buying SSDs at the same time. My first SSD, a 32 GB OCZ, is still working fine as the swap drive (!) on my home PC, which gets 3 - 4 hours of use per day.

    Then I bought a bunch of 128 and 256 GB SSDs for my home laptop, work laptop, wife's laptop and media player (the wife's old laptop :) )

    Then two 512 GB SSDs as main drives in my home PC. No SSD failures. I said goodbye to the HDD industry that was happy to overcharge me when they failed to build factories on land that doesn't get flooded once a year, and I swore in my beard I will never give them one more dollar.

    Even for work, we switched from 1 TB HDDs to 256 GB SSDs and no failure, on machines that are on 24/7. Happy camper here with SSDs.

  31. Anonymous Coward
    Anonymous Coward

    HDDs will never be able to write at the same rate as SSDs. Let us do some math:

    At 100-200MB/sec maximum sustained sequential write, we can expect to write 200 x 24 x 3600 MB per day, or roughly 16TB/day, which is roughly 1 DWPD for a 16TB drive. This is the theoretical limit.

    SSDs can run at 10x that at least, and so they can write 160TB per day. That would map to 10 DWPD for 16TB.

    What this means is that the write amplification for SSDs will be worse than for HDDs. Also, users would likely be using smaller-capacity SSDs than HDDs (cost being a factor), which means the DWPD for SSDs should be more than 10x that of HDDs.
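
    The same arithmetic as a quick sketch, for anyone who wants to play with the rates:

        # Physical write ceiling: how much data can a device absorb in 24 hours
        # at a given sustained sequential write rate?
        def max_tb_per_day(seq_write_mb_s):
            return seq_write_mb_s * 86400 / 1e6

        for rate in (100, 200, 2000):   # HDD low end, HDD high end, ~10x for SSD
            tb = max_tb_per_day(rate)
            print(f"{rate:>4} MB/s flat out: {tb:6.1f} TB/day "
                  f"= {tb / 16:.1f} DWPD on a 16 TB drive")
        # 200 MB/s around the clock tops out near 17 TB/day (~1 DWPD on 16 TB),
        # while a 10x faster device could in principle sustain ~10 DWPD.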
