I want to transplant your storage brains: WD desktop NAS refresh

WD has updated its small office Sentinel desktop filer range, adding Windows Storage Server 2012 with its Storage Spaces facility. The Sentinel storage servers are for small and medium businesses that don’t want or need a full-blown racked NAS system running on x86 processors. The DX4200 is basically the same as the 4-bay, 2 …

  1. petur

    Beyond RAID

    4TB in RAID1/5 is already stretching it, 6TB even worse. rebuild times become so ridiculously long....

    1. JEDIDIAH

      Re: Beyond RAID

      > 4TB in RAID1/5 is already stretching it, 6TB even worse. rebuild times become so ridiculously long....

No, not really.

      This is not nearly as dire as everyone with zero actual experience wants to make it out to be.

    2. JaimieV

      Re: Beyond RAID

      Since the array is accessible and usable while rebuilding, and you *of course* have a backup of it because RAID IS NOT BACKUP GUYS, how is a bit of speed loss while it's rebuilding an issue?

      1. Trevor_Pott Gold badge

        Re: Beyond RAID

The issue isn't speed loss. The issue is "likelihood of experiencing an Unrecoverable Read Error (URE) during rebuild, thus causing a second disk to drop." Also Google "rebuild stress".

        There are two schools of thought on this:

1) "Disk manufacturer failure statistics are so low that URE issues during rebuild are just a bogeyman".

        Pro: The statement is partially correct. Manufacturer stats do basically say disks don't die.

        Con: Manufacturers lie. Whether the disk can be "reconditioned" or not is completely irrelevant. What matters is "did it drop out from the RAID". Here, statistics on UREs during rebuild are much higher.

2) "Disks in RAID arrays tend to be bought at the same time, from the same place. Since bad disks tend to occur in runs, if one disk fails, there's a good chance its neighbor will fail soon thereafter!"

Pro: The beginning of the statement is true. Disks in any given array are probably from the same run of drives, and a "bad" run will have more disks fail than the mean.

Con: Your chances of this being an issue in a 4-disk array are statistically irrelevant. "Same run" deaths tend to be something you encounter when talking about 24-36 disks in a given array, and it is one of the reasons we've tended to limit arrays to 16 disks. Statistically speaking, your chances of getting a defective pair in an 8-disk array - even from a bad run - are vanishingly small. RAID 6 (tolerating 2 disk failures) seems to scale well up to 36 disks.

Of course, all of this was done in the era of 2TB drives being "the new shiznizzle". The estimates then were that 4TB would be the end of RAID 5 (and that matches my experience, TBH), with 8TB being the end of RAID 6. (And guess what comes out this month...)

        RAID is not totally dead, but it's on life support. It's object storage or nothing from here on out. Even triple parity is just delaying the inevitable. Server SANs do object storage for VMs. For everything file, look to companies like Caringo.
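        (A back-of-envelope sketch of the URE arithmetic behind those size thresholds. The 1-in-10^14 figure below is the typical consumer spec-sheet rate, not a number from this thread, and the calculation assumes each bit read is an independent trial, which real drives are not - so treat it as illustrative only.)

        ```python
        # Odds of hitting an unrecoverable read error (URE) during a
        # RAID 5 rebuild, which must read every surviving disk in full.
        # Spec-sheet consumer rate: ~1 URE per 1e14 bits read (assumed).

        URE_RATE = 1e-14  # probability of a URE per bit read

        def rebuild_ure_probability(surviving_disks: int, disk_tb: float) -> float:
            """P(at least one URE while reading all surviving disks in full)."""
            bits_read = surviving_disks * disk_tb * 1e12 * 8  # TB -> bits
            return 1 - (1 - URE_RATE) ** bits_read

        # A 4-bay RAID 5 rebuild reads the 3 surviving disks end to end:
        for tb in (2.0, 4.0, 6.0):
            p = rebuild_ure_probability(3, tb)
            print(f"3 x {tb:.0f}TB survivors: {p:.0%} chance of a URE during rebuild")
        ```

        By this (pessimistic) model the failure odds climb steeply with disk size, which is why the "end of RAID 5" estimates track drive capacity rather than drive count.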

      2. Anonymous Coward

        Re: Beyond RAID

My experience with these types of boxes - and to be honest with various RAID arrays on servers - is that while arrays might *technically* be accessible during a rebuild, they are frequently not *usable*; performance is far too slow to be considered usable, so effectively your array is offline until the rebuild completes.

        And a typical backup solution isn't going to cover you for that time, especially if your backup is on offline media... how long is it going to take you to restore it, onto what, and how many hours or days old is it?

        So yes, size of hard disks in an array is a highly important consideration when you are considering recovery time from a drive failure.

    3. petur

      Re: Beyond RAID

I'm actually amazed to get downvoted on this... the last time I saw a RAID5 rebuild fail with an error on a second disk was < 6 months ago. And those were only 2TB disks.

Yes, it's not a backup, but a RAID rebuild is way more convenient than a full reinstall + backup restore (which will never have all the latest changes unless you were on realtime replication, and IMHO that is NOT a good backup strategy)

  2. vmistery

    What use is 45 usb3 ports on this new one?! Is that for some crazy new expansion capability or just a typo :(

  3. Neoc

    Meh.

    I am a home user. A while back, I got tired of trying to find the movie/TV-series DVD I wanted to watch and started ripping my DVDs to hard-drive. ...and promptly ran out of drive-space which prompted me to look into NAS boxes.

    My first foray was building a rack-mounted HDD farm, shoving Linux on it and using software RAID. 9x2TB drives (plus a system drive). Nice but, as another commenter said, not a backup - if anything went wrong with the box itself (rather than a drive), I'd potentially lose the whole dataset. Mind you, it's been around for almost 7 years now without a hitch.

    So I went looking for a commercial solution to do the backups and that's when I realised no-one out there catered to my need - you either had home-level solutions that sat on a desk and took 2 (or if you were lucky, 4) home-level HDDs; or you went for the enterprise-level solutions that required you to have enterprise-level drives. Expensive. I eventually settled on a Netgear ReadyNAS and shoved it full of 3TB HDDs, then had it sync with my home-made NAS RAID box. I can now sleep (more) soundly at night.

    But my initial annoyance still remains - there is nothing for the home hobbyist; you have to choose either the small home solution or the more expensive (per HDD hosted) enterprise solution. Come on, storage people - how about an 8 (or more) drive unit (rack-mounted or not, I can handle both) which takes "home" PC HDDs instead of enterprise-level HDDs? I don't care about recovery speed, I don't really care about MTBF (I haven't had a drive failure in forever - my oldest working drive is 14+ years old), I don't even care that much about access time. I *do* want a large-storage box to store my DVD/BR rips, thank you.

    Oh, and somewhere to store my physical DVDs/BRs once they're ripped, but that's another problem for another time.

    (edited for spelling)

    1. John Tserkezis

      Re: Meh.

      "there is nothing for the home hobbyist; you have to choose either from the small home solution or the more expensive (per HDD hosted) enterprise solution."

      What you want is cheap and good at the same time. Face facts, you're not going to get it.

      There is good reason why you either can't, or are not allowed to, use desktop drives in arrays. They're not suited to the purpose, especially in RAID arrays. Quality and reliability issues aside, there are behavioural reasons: things desktop drives do that are not suitable within an array. They are designed to behave that way because it suits a desktop environment. The result is RAID arrays that can appear to "randomly" fall over, even though when you test the drive you see there was "nothing wrong" with it. Not only that, some behaviours that protect the drive under desktop conditions can cause the drive to "self destruct" under array conditions.

      You do have some options, such as the WD Red drives. They are physically Green drives (bottom of the barrel desktops), with firmware changes that make them suitable for use within an array. So they won't randomly fall over and still pretend to be OK, but they ARE shit drives, so will fail (catastrophically) statistically earlier. But given the price, they're plenty good enough for cheap not-so-critical mass data applications.

      Enterprise drives are more than just drives that have the word "enterprise" stamped on them. They ARE better, faster, higher quality, more reliable, and because of all that, more expensive.

      You're going to have to deal with it: Better products cost more, crappy products cost less. There's more to this of course, where choosing a drive for any given array is not like choosing a tyre for a car, it's more complicated, and there are many more issues to look at. And yes, price is one of those.

      1. Neoc

        Re: Meh.

        @John Tserkezis:

        "What you want is cheap and good at the same time. Face facts, you're not going to get it."

        Depends on what your definition of "good" is. Mine is different from an enterprise.

        "You do have some options, such as the WD Red drives. They are physically Green drives (bottom of the barrel desktops), with firmware changes that make them suitable for use within an array. So they won't randomly fall over and still pretend to be OK, but they ARE shit drives, so will fail (catastrophically) statistically earlier. But given the price, they're plenty good enough for cheap not-so-critical mass data applications."

        That's exactly what my needs are: "cheap not-so-critical mass data applications".

        Point of order, BTW - I use exclusively WD Green drives and my self-built box has 9 of them in a RAID5 array. Never had any dropout, haven't had a failure yet in all those years (I had *one* drive fail the first time I spun it, replaced by the shop under warranty, replacement is still spinning) and, as I said, the oldest has been spinning for 14+ years (except during the odd power failure in the area).

        My experience with those "shit" drives (as you call them) has been very good FOR MY PURPOSE. And that's my point; I am *not* an enterprise - I do not have staff all trying to access storage, I do not need quick access, etc, etc. So far, my 15x WD Greens (in the first array and various PCs) have been spinning without a single hiccup. I have replaced CPUs, but never one of my drives.

        YMMV, but *my* experience with all those drives has been spotless.

        "You're going to have to deal with it: Better products cost more, crappy products cost less. There's more to this of course, where choosing a drive for any given array is not like choosing a tyre for a car, it's more complicated, and there are many more issues to look at. And yes, price is one of those."

        Yet again, you're making my point for me - I DON'T NEED ENTERPRISE-LEVEL RESPONSE/MTBF. And I have yet to experience the drop-outs you said I should be experiencing with my crappy drives. Why? Because I am a home user with home-user requirements of the hardware. Which is why I complain about the fact that in order to get more drives per NAS I also have to take a quality/price hit WHICH IS NOT REQUIRED FOR MY PURPOSE.

        Your response to my complaint is EXACTLY why I am complaining in the first place.

    2. Sandtitz Silver badge

      Re: Meh.

      "Come on, storage people - how about an 8 (or more) drive unit (rack-mounted or not, I can handle both) which takes "home" PC HDDs instead of enterprise-level HDDs?"

      Synology and QNAP both have models that can be fitted with dozens of SATA drives.

      1. petur

        Re: Meh.

        They even have USB3 expansion racks to add more storage...

  4. Henry Wertz 1 Gold badge

    Desktop versus enterprise firmware

    Regarding desktop versus enterprise firmware, there are a few differences. The main one: desktop drives are shipped so that when the drive hits a bad sector, it tends to hang while attempting to recover the bad portion (if it succeeds, the data is copied to a spare block); an enterprise drive will almost immediately return a read error, failing to retrieve that block but not hanging the disk. The array controller can then either offline the disk, run the degraded array, and set off lots of alarms; or keep the disk online for speed and set off lots of alarms.
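    (A toy model of that timeout interaction. The timeout values below are illustrative assumptions of mine, not any vendor's spec; the firmware feature being modelled is usually called TLER or ERC.)

    ```python
    # Toy model of why a desktop drive "falls out" of an array on a bad
    # sector while an enterprise/NAS drive does not. All timeout values
    # are illustrative assumptions, not vendor specifications.

    CONTROLLER_TIMEOUT_S = 8   # how long a RAID controller waits (assumed)
    DESKTOP_RECOVERY_S = 120   # desktop firmware retries for minutes (assumed)
    TLER_LIMIT_S = 7           # error-recovery cap on NAS/enterprise firmware

    def read_bad_sector(drive_recovery_limit_s: float) -> str:
        """What the controller concludes when the drive hits an unreadable sector."""
        if drive_recovery_limit_s > CONTROLLER_TIMEOUT_S:
            # The drive is still silently retrying when the controller
            # gives up waiting: the whole disk is marked dead and the
            # array degrades, even though the disk is mostly fine.
            return "disk dropped from array"
        # The drive reports the read error promptly; the controller
        # rebuilds that one block from parity and the disk stays in.
        return "sector rebuilt from parity"

    print("desktop firmware: ", read_bad_sector(DESKTOP_RECOVERY_S))
    print("TLER/ERC firmware:", read_bad_sector(TLER_LIMIT_S))
    ```

    On drives that expose it, the recovery cap can be inspected or set via smartmontools (`smartctl -l scterc`), though many pure-desktop drives refuse the command entirely.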

    Regarding disks, I saw one setup where they had like 64 drives racked onto this custom rack thing they'd built, with all these external port multipliers hooking it all into a couple of SATA ports on a 64-core system. Apparently it worked OK, although I think they were turning everything off to change failed disks.

    Regarding disk interchange, there are enclosures that support both SAS and SATA; the brackets either have a SATA connection, or SAS can be converted to SATA via a plug adapter. Some come with an adapter to plug right into a regular SATA port on the controller (only supporting SATA drives, not SAS, in that case.) They're still expensive though.

    1. Neoc

      Re: Desktop versus enterprise firmware

      @Henry Wertz 1:

      "Regarding disks, I saw one setup where they had like 64 drives racked onto this custom rack thing they'd built, with all these external port multipliers hooking it all into a couple SATA ports on 64-core system."

      Yes, I saw those. The company (whose name I can't remember right now, damn my memory) has even made the plans available to the public, including how to order what were (at the time) made-to-order SATA port multipliers. They will even sell you an empty rack-mount enclosure (since they also designed that from scratch) for you to build your system in.

      Unfortunately, shipping one of those across the Atlantic is too expensive for me. As for using the plans to get one made here in OZ by a local sheet-cutter... would cost even more (I checked). :(

      1. phuzz Silver badge

        Re: Desktop versus enterprise firmware

        You're thinking of Backblaze and their storagepods.

        They've also published some information about their experience of hard drive longevity, and they didn't find any benefit in using enterprise drives instead of consumer ones.
