Hold on a sec. When did HDDs get SSD-style workload rate limits?

A recent Register article reminded me that hard drive vendors are introducing warranty limits on their products for the amount of data that can be written to them in any one year. SSDs have always had this restriction; does the same now apply to HDDs?

    1. Alan Brown Silver badge

      Re: We use

      "Seagate 2TB drives here, in the raid array for the CCTV system"

      Barracuda DL/DMs or Constellations, by any chance?

      1. YetAnotherLocksmith Silver badge

        Re: We use

        Just avoid 3TB drives - they appear to have far higher failure rates than 2 or 4TB disks.

        (This was tested across loads of disks, there's an article on here somewhere about it I think)

  1. AMBxx Silver badge
    Boffin

    Help me out here

    How important are these figures in the real world? Are companies hitting this sort of data volume in real life? To hit the numbers, are you talking about constantly writing data night and day?

    1. Steven Jones

      Re: Help me out here

      Absolutely they did. I recall several years ago that the drives in enterprise arrays were constantly being driven. We followed a policy of "stripe everywhere" and spread many, many workloads over the same enterprise arrays. Different workloads were busy at different times, and in a modern very large enterprise there simply aren't quiet periods. If it wasn't running OLTP for the call centres or the backend for the online presence, it was churning through vast amounts of overnight work: extracting information for data warehousing, accepting orders, configuration updates, masses of external diagnostic information. That's without making backups, taking snapshots and so on.

      The only way to get efficient use of (very expensive) enterprise storage and get the maximum throughput was to spread the workload as widely as possible and use statistics to balance the load. Even then, back-end HDDs would easily be hitting 50%+ utilisation for much of the 24-hour day, and much higher at other times.

      I/O latency was always an issue with HDDs, even with massive amounts of array and database caching. It was an issue only resolved when SSDs came along.

      NB: I must emphasise that these were enterprise disks (typically of relatively reduced capacity, spinning at 10k or 15k RPM), and they did fail - but very large enterprises depend on maintenance contracts and hot-swapping of failing drives, so warranties are completely immaterial. Enterprise arrays are defined by availability - basically 100%, and anything else causes a meltdown of relationships with suppliers.

  2. Michael Sanders

    Great research - that was a very informative article about the state of affairs. My two cents is that flash is taking over, and it's pretty obvious, provided it continues to get bigger and cheaper at the rate it has been - and I don't see any reason why it shouldn't. It's also reasonable to expect that the technology used in SSDs will become less delicate and more permanent.

  3. Computurd

    These specifications are nothing new - they have been on HDDs for years. It isn't due to shingling either; only one of the models, the Seagate 8TB, even uses shingles - the rest do not.

    A comparison of DWPD (Drive Writes Per Day) is frivolous with an 8TB volume.
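
    To put a rough number on that, here's a quick back-of-the-envelope sketch in Python (assuming a nominal 8 TB capacity and decimal units; nothing here is vendor data):

        # Why DWPD is an awkward yardstick for an 8 TB HDD: even one full
        # drive-write per day is close to the drive's maximum sustainable rate.
        capacity_bytes = 8 * 10**12      # nominal 8 TB, decimal units
        seconds_per_day = 24 * 3600

        required_mb_s = capacity_bytes / seconds_per_day / 10**6
        print(f"1 DWPD on 8 TB = {required_mb_s:.0f} MB/s, sustained 24 hours a day")
        # ~93 MB/s, i.e. the drive would have to stream near its top sequential
        # speed continuously just to reach a single drive-write per day.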

  4. Frumious Bandersnatch

    could be just

    some sort of "retconning" (retroactive continuity) or whatever that word is* for when some new tech becomes the new normal and we begin to look at the old tech through the lens of the new. Unlike something like "horse-power", where we do the opposite.

    I always thought that the number of power cycles was the main reason spinning disks failed, though. Can rust wear out? Or does it, as Neil Young would have it, never sleep?

    * the word I was looking for was probably "back-formation", it seems

  5. Nigel 11

    DWPD - wrong statistic?

    Drive Writes per day makes sense for an SSD because the storage medium suffers wear and tear from being written to.

    Writing to a magnetic surface should have no effect on it. What does wear out a hard disk is seeking. So if they are trying to differentiate desktop drives from enterprise ones, they should quote a maximum number of head movements per day. (This broadly equates to IOs per day, read or write being irrelevant).

    Unless the head technology has now become so ultra-miniaturised that the total time spent writing is now the life-determining parameter, rather than the amount of mechanical head-seeking. But that would be a function of data density, and so there won't be much difference between desktop and enterprise drives (absent some super-good and super-expensive head technology that isn't widely known?).
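
    For a sense of scale, here's a rough conversion from the kind of annual workload rating vendors quote into the "head movements per day" figure suggested above (Python; the 180 TB/year rating and the 64 KiB average transfer size are illustrative assumptions, not the spec of any particular drive):

        # Turn an annual workload rating into "head movements per day",
        # pessimistically assuming one seek per I/O. Both inputs are
        # illustrative assumptions rather than figures for a real model.
        workload_bytes_per_year = 180 * 10**12   # assumed 180 TB/year rating
        avg_io_bytes = 64 * 1024                 # assumed 64 KiB per transfer

        bytes_per_day = workload_bytes_per_year / 365
        seeks_per_day = bytes_per_day / avg_io_bytes
        print(f"~{bytes_per_day / 10**9:.0f} GB/day, ~{seeks_per_day / 10**6:.1f} million seeks/day")
        # ~493 GB/day and ~7.5 million seeks/day under these assumptions.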

  6. Steven Jones

    This is not news. There has always been a difference between enterprise disks and standard ones. I recall, about 7 or 8 years ago, dealing with an HP array where there were two different HDD options, with the higher-capacity one having a much reduced duty cycle, especially insofar as random access was concerned (unfortunately the specification was less than useful in defining what it meant by random rather than sequential access, or how it was to be measured).

    In the case of the top-level enterprise HDDs, we'd hammer them and they were hot-replaced as they failed (or were showing signs of failure) under the maintenance contracts. The primary issue was if performance was compromised during any RAID rebuild processes.

  7. martinusher Silver badge

    How are the drives being used?

    Back 'in the old days', when processing and storage costs were higher, there was a well-defined hierarchy of information storage, ranging from high-speed local storage to low-cost bulk storage, that made optimum use of resources. Since we've been in a situation where both processing and storage costs are unrealistically low, people don't think the storage hierarchy matters -- you just max everything out, it's cheap -- but then you run into other costs related to finding and archiving material. Part of that cost is that large disks have finite seek times and usage lives.

    Just as unrestricted processing costs have resulted in code bloat that negates the advantages of having high-performance processors, unrestricted storage costs have resulted in abuse where we've turned our information storage areas into the logical equivalent of a hoarder's world. Instead of asking ourselves why we've got all this stuff we're just demanding bigger houses to put it in.

    1. Charles 9

      Re: How are the drives being used?

      "Instead of asking ourselves why we've got all this stuff we're just demanding bigger houses to put it in."

      Why do we have all this stuff? Because just when you need it the most, the original source up and vanishes without a trace. Many of us have run afoul of this firsthand, so the mindset is "better safe than sorry" and "get it before it's gone." You can always get more storage (if it isn't stacks of hard drives now, it's books of CDs or boxes of floppy discs then). It's a lot harder to resurrect a site that doesn't exist anymore.

  8. Anonymous Coward
    Anonymous Coward

    S.M.A.R.T. provides this data, you had to know they'd use it

    While they could theoretically refuse warranty service by looking at the data volumes, I highly doubt they are doing that. At least, not yet... drive makers are hurting, though, so getting more stringent on warranty replacement is probably a strategy they are considering. Anyway, the recommended DWPD figures might help dissuade the use of a consumer drive in a corporate server, where previously, if you didn't need 15K levels of performance, you could save a bit of money buying the cheaper variety and figure RAID would protect you from data loss.

    The high-performance 'enterprise' drive is already pretty much dead, and the general-purpose 'consumer' drive has only a year or two left. Remember, they don't need to equip PCs with 4 TB drives: once a 1 TB SSD reaches price parity with a 1 TB HDD, which I'd guess happens around 2020, all PCs will ship with an SSD and you will add an HDD at extra cost if you need the additional storage.

    The next to fall will be the video drive, in a few years when SSDs come close enough to price parity with HDDs. Endurance-wise, my back-of-the-envelope calculations indicate that a 1 TB Samsung drive in a DVR recording six HD tuners 24x7 would last about two years before it hits its rated write limit. So endurance needs to be improved, but not THAT much. Double that endurance and you should be fine - after all, Storage Review's tests indicate that many SSDs last several times longer than their rated life, and large block writes, as in a DVR, are the optimal case. A four-year rated life that is going to be 6-12 years in reality should be fine, especially since S.M.A.R.T. can tell you when the drive is running out of relocatable blocks.
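
    For anyone who wants to check that sum, here's the arithmetic spelled out (Python; the 8 Mbit/s per-tuner bitrate and the 400 TB endurance rating are my own illustrative assumptions, not figures for any particular Samsung model):

        # Back-of-the-envelope DVR endurance sum. The bitrate and the rated
        # endurance are assumptions for illustration only.
        tuners = 6
        mbit_per_sec_per_tuner = 8       # assumed average HD broadcast bitrate
        rated_endurance_tb = 400         # assumed write limit of a 1 TB SSD

        bytes_per_day = tuners * mbit_per_sec_per_tuner / 8 * 10**6 * 86400
        tb_per_day = bytes_per_day / 10**12
        years_to_limit = rated_endurance_tb / tb_per_day / 365
        print(f"~{tb_per_day:.2f} TB written per day, rated limit hit in ~{years_to_limit:.1f} years")
        # ~0.52 TB/day and ~2.1 years - consistent with the estimate above.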

    Once video drives fall, only "capacity" drives that use SMR will remain. They will be used for 'cold' storage that has few writes so neither the lifetime nor crappy write latency will matter.

    1. Alan Brown Silver badge

      Re: S.M.A.R.T. provides this data, you had to know they'd use it

      "drive makers are hurting though so getting more stringent on warranty replacement is probably a strategy they are considering"

      They've already reduced 5/3 year warranties to 1/3 year ones and pushed prices up. Customers have noticed.

      If they start rejecting warranty returns then they'll turn the steady rate of SSD replacement into a stampede. The warranty on 2 and 4TB SSDs makes HDDs look ill, and the reduced power plus the performance boost means that "high speed" arrays are superfluous (i.e. the extra cost of the drives gets more than made up for by not having to buy another array to handle the high-IOPS stuff).

    2. Alan Brown Silver badge

      Re: S.M.A.R.T. provides this data, you had to know they'd use it

      "Once video drives fall, only "capacity" drives that use SMR will remain. They will be used for 'cold' storage that has few writes so neither the lifetime nor crappy write latency will matter."

      Once video drives fall, SMR might be the only HDD type remaining, but reduced volumes will probably make them more expensive than LTO8.

    3. Anonymous Coward
      Anonymous Coward

      Re: S.M.A.R.T. provides this data, you had to know they'd use it

      > "Remember, they don't need to equip PCs with 4 TB drives"

      Whilst it's a niche thing, some of us need bulk local storage. Modern high-quality digital stills cameras are capable of generating 70MB image files at a frame rate of 5fps (sometimes this is needed - moving subject, etc.), which chews through space like there's no tomorrow - it's entirely possible to burn through 10-20GB in a day's shooting (rough sums below). Whilst their sequential data rate is fine, HDDs with lower rotational speeds tend to have noticeably higher latency, which can be a pain if you're batch-processing 200+ images. Sometimes a NAS isn't appropriate (e.g. space issues), and nor is cloud storage (e.g. limited upstream rate).

      I don't want to think what 4K video is like on storage ...
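
      Working those stills numbers through (Python; the 250-frame day is just a point within the "200+" range mentioned, the rest comes straight from the figures above):

          # Arithmetic on the stills-camera figures quoted in the comment.
          file_mb = 70
          fps = 5
          frames_per_day = 250           # somewhere in the "200+" range

          burst_mb_s = file_mb * fps     # data produced while the shutter is held down
          daily_gb = file_mb * frames_per_day / 1000
          print(f"burst ~{burst_mb_s} MB/s, a day's shooting ~{daily_gb:.1f} GB")
          # ~350 MB/s bursts and ~17.5 GB/day - in line with the 10-20GB estimate.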

      1. Charles 9

        Re: S.M.A.R.T. provides this data, you had to know they'd use it

        You should see a modern digital filming session, which not only uses at least 6K resolution with a high color gamut but also sometimes films above 48fps AND has to minimize compression, due to the need to avoid generation artifacts during digital editing (if any compression is allowed at all). So you have the need for high throughput, high capacity, AND high churn all at the same time. From what I hear, the need for high everything basically restricts them to very expensive, very specialized equipment, and forget about transmitting this stuff over even dedicated fiber. Most of the time, a courier with a hard drive is both cheaper and faster for transport during the production stage.

  9. YARR
    Boffin

    Re-inventing the disk drive --> Magnetic scrolls

    If magnetic disks are heading towards archival storage, maybe it's time to re-invent the disk drive by merging it with magnetic tape?

    Magnetic tape heads can read multiple data tracks in parallel, giving a higher data rate at a slower linear speed. Combining the parallel-head technology of tape with the moving-head technology of disk drives could achieve higher capacity than disks with a lower seek time than tape. The magnetic medium would be much wider than tape - like a magnetic parchment passing between two scrolls. However, the scroll length would be short enough to keep the seek time to only a few seconds. The scrolls could be wound onto a single roll, permitting them to be removable like a camera film. What could possibly go wrong?

    1. Anonymous Coward
      Anonymous Coward

      Re: Re-inventing the disk drive --> Magnetic scrolls

      Tape switched from the linear scanning of an audio cassette player to helical scanning years ago.

      1. Alan Brown Silver badge

        Re: Re-inventing the disk drive --> Magnetic scrolls

        "Tape switched from the linear scanning of an audio cassette player to helical scanning years ago."

        None of the capacity tape mechanisms use helical scanning. The last to do so (SAIT) went out of production several years ago.

        LTO is linear multipass, serpentine format.

  10. razorfishsl

    It should be based more on head seek than data written.

    Full surface seeks are far more stressful than track-to-track skipping.

    It's more that the filesystems need to be looked at, so that drive seeks operate within regions related to each other.

    There should also be a facility for the drives to read and re-write data once every few months, as a background task, to take care of drive and magnetic tolerances slipping.
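
    A minimal sketch of what such a background refresh pass might look like at the host level (Python; the device path and chunk size are placeholders, and a real implementation would need scheduling, throttling and error handling):

        # Illustrative refresh pass: read each chunk and write the same bytes
        # back in place, renewing the recording before tolerances drift.
        # The target path and chunk size are placeholder assumptions.
        import os

        PATH = "/dev/sdX"                # hypothetical target device or file
        CHUNK = 1024 * 1024              # work in 1 MiB chunks

        def refresh_pass(path=PATH, chunk=CHUNK):
            with open(path, "r+b", buffering=0) as f:
                f.seek(0, os.SEEK_END)   # find the end of the device/file
                size = f.tell()
                pos = 0
                while pos < size:
                    f.seek(pos)
                    data = f.read(chunk)
                    if not data:
                        break
                    f.seek(pos)
                    f.write(data)        # rewrite the same data in place
                    pos += len(data)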

  11. Wayne Sheddan

    Device reliability - overemphasized anyway?

    Storage - fast, cheap, reliable, size: choose any 3.

    At FAST '16 Eric Brewer (VP Infrastructure, Google) indirectly but correctly reiterated that reliability is a SYSTEM-level capability - NOT a DEVICE-level capability - and most bulk storage now uses distributed replicas or erasure coding anyway. Hence Eric opined that the go-forward requirement for an HDD device is fast and cheap capacity, and let the system deal with reliability.

    Most things like filers or object stores border on being WORM anyway... the working set is perpetually small and constantly shrinking as a portion of the total size. HDD is becoming largely a home for these vast swathes of stagnant nearline data that people should instead be consigning to /dev/null but can't bring themselves to do so. Worrying about DWPD makes no sense in this space.

    And let's face it - if HDD can't outstrip solid state in terms of capacity, it's just a matter of time. Dropping device reliability will be an issue of survival.

  12. YetAnotherLocksmith Silver badge

    Surely an array of redundant flash chips?

    Surely someone could make an "HDD" that simply takes SD cards in an array, and handles the wear leveling at a higher level (as well as the leveling built into the individual cards)?

    You plug in a few ?Gb, ??Gb or ???Gb (micro?)SD Cards, and the controller, in the form of a regular HDD sized thing, gets on with it. Uses JBOD architecture or some fancy RAID, according to your tastes, & presents as a standard SSD/HDD.

    It completely removes a single point of failure too: if the controller dies, just put the SD cards in another controller. If any one card dies, you get an alert and you swap that card.

    This already exists, doesn't it? (It's too simple and obvious)

    1. Charles 9

      Re: Surely an array of redundant flash chips?

      You forget that removable media tends to get the LOWEST quality of flash chips. They get the leftovers, which means they're the lowest in reliability and only intended for occasional use. The odds are greater that you'll get multiple concurrent card failures (so you're more likely to lose data). Furthermore, the point about controller failure is that only the controller knows how the chips are arranged, so when it goes, it's a lot like losing your big password: no one else knows how to reconstruct the data. That's why you can't just swap a controller chip or the like when an SSD fails.

  13. ExoColonist

    One possible direction

    Approximately a year ago, we (the company I work for, to be exact) were negotiating/designing a new class of storage cluster for use in our primary product, large scale private/business Web hosting.

    About halfway through the negotiations, one of the storage vendors broke the mould and changed their design from a traditional 3-tier design (lots of 7K spindles, some 10K spindles and a couple of handfuls of SSDs) to a 2-tier construction with a 2:3 SSD/7K ratio. This happened to be the best mix of price and performance and ended up being the offer we went with, and I later learned that that change of design was, in all likelihood, motivated by Samsung's introduction of their 3.84TB SSDs.

    This move is, to me, a clear indication of the way we are headed. I think SSDs will continue gobbling up the entire performance-focused drive market; 7K disks will take a while to die, but it looks inevitable to me that that segment will shrink as SSD price/TB slowly nears that of spinning rust.

  14. SeanC4S

    http://uk.emc.com/about/news/press/2016/20160229-03.htm

    https://youtu.be/1xurec_UO60

  15. wsm

    Fast, big, cheap

    When it's a business that's buying, it's got its priorities in order: fast, big, cheap. When consumers buy, or when they buy from manufacturers/assemblers that want to please them, it's the other way around: cheap, big, fast.

    When SSDs can satisfy both sets of buyers, HDDs with all of their disadvantages will be a thing of the past.

    1. Anonymous Coward
      Anonymous Coward

      Re: Fast, big, cheap

      And that may take a while, given that, from what I hear, the capital costs for rust are still at least an order of magnitude lower than for flash, even 3D flash. Plus it's still easier to ramp up an existing rust plant to handle larger capacities than it is for flash, which also faces a supply issue, last I read. In other words, it's a lot tougher for a flash plant to reach break-even amortization in the current environment, whereas most rust plants passed it a while back.

  16. Anonymous Coward
    Anonymous Coward

    SAS

    This is why I made the jump to SAS. For HDDs, the price increase isn't that much, and the MTBF and the unrecoverable sector rates are much better.

    For SSDs though, SAS is sadly $$$

  17. prof_peter

    RTFWP (Read The Fine White Paper)

    George Tyndall, “Why Specify Workload?,” WD Technical Report 2579-772003-A00

    http://www.wdc.com/wdproducts/library/other/2579-772003.pdf

    Basically the heads are so close to the platter that they touch occasionally. To keep the drives from dying too quickly, the heads are retracted a tiny bit when not reading or writing. From the paper:

    "As mentioned above, much of the aerial density gain in HDDs has been achieved by reducing the mean head-disk clearance. To maintain a comparable HDI [head-disk interface] failure rate, this has required that the standard deviation in clearance drop proportionately. Given that today’s mean clearances are on the order of 1 – 2nm, it follows that the standard deviation in clearance must be on the order of 0.3nm – 0.6nm. To put this into perspective, the Van der Waals diameter of a carbon atom is 0.34nm. Controlling the clearance to atomic dimensions is a major technological challenge that will require continued improvements in the design of the head, media and drive features to achieve the necessary reliability.

    In order to improve the robustness of the HDI, all HDD manufacturers have in the recent past implemented a technology that lessens the stress at this interface. Without delving into the details of this technology, the basic concept is to limit the time in which the magnetic elements of the head are in close proximity to the disk. In previous products, the head-disk clearance was held constant during the entire HDD power-on time. With this new technology, however, the head operates at a clearance of >10nm during seek and idle, and is only reduced to the requisite clearance of 1 – 2nm during the reading or writing of data. Since the number of head-disk interactions becomes vanishingly small at a spacing of 10nm, the probability of experiencing an HDI-related failure will be proportional to the time spent reading or writing at the lower clearance level. The fact that all power-on-time should not be treated equivalently has not been previously discussed in the context of HDD reliability modeling."

  18. RaidOne

    Maybe not relevant...

    But I stopped buying HDDs 5 years ago, and started buying SSDs at the same time. My first SSD, a 32 GB OCZ, is still working fine as the swap drive (!) on my home PC, which gets 3 - 4 hours of use per day.

    Then I bought a bunch of 128 and 256 GB SSDs for my home laptop, my work laptop, my wife's laptop, and the media player (the wife's old laptop :) ).

    Then two 512 GB SSDs as main drives in my home PC. No SSD failure. I said goodbye to the HDD industry, which was happy to overcharge me when they failed to build factories on land that doesn't get flooded once a year, and I swore in my beard I would never give them one more dollar.

    Even for work, we switched from 1 TB HDDs to 256 GB SSDs with no failures, on machines that are on 24/7. Happy camper here with SSDs.

  19. Anonymous Coward
    Anonymous Coward

    HDDs would never be able to write at the same rate as SSDs. Let's do some math:

    At 100-200MB/sec maximum sequential write (SQW), we can expect to write 200 x 24 x 3600 MB per day, or roughly 17TB/day, which is roughly 1 DWPD for a 16TB drive. This is the theoretical limit.

    SSDs can run at 10x that at least, so they can write around 170TB per day. That would map to 10 DWPD for 16TB.

    What this means is that the write amplification for SSDs will be worse than for HDDs. Also, users will likely be using smaller-capacity SSDs than HDDs (cost being a factor), which means the DWPD rating for SSDs should be more than 10x that of HDDs.
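
    The same sums made explicit (Python; the 200 MB/s HDD streaming rate and the "10x at least" SSD multiple are the round figures used above, not measurements):

        # Daily write ceilings for HDD vs SSD, using the comment's round figures.
        seconds_per_day = 24 * 3600

        hdd_bytes_per_sec = 200 * 10**6          # top of the 100-200MB/sec range
        ssd_bytes_per_sec = 10 * hdd_bytes_per_sec

        hdd_tb_day = hdd_bytes_per_sec * seconds_per_day / 10**12
        ssd_tb_day = ssd_bytes_per_sec * seconds_per_day / 10**12
        print(f"HDD ceiling ~{hdd_tb_day:.1f} TB/day (~1 DWPD on a 16 TB drive)")
        print(f"SSD ceiling ~{ssd_tb_day:.0f} TB/day (~10 DWPD on a 16 TB drive)")
        # ~17.3 TB/day vs ~173 TB/day - hence the roughly 10x DWPD gap.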
