Storage Forum

This topic was created by Chris Mellor 1.

COMMENTS

This topic is closed for new posts.
  1. Chris Mellor 1

    Storage Forum

    This is the Register Storage Forum for the discussion of any and all computing storage issues.
    1. Annihilator
      Paris Hilton

      Is it?

      Is it a forum, or merely a topic?..

      1. Anonymous Coward
        Anonymous Coward

        Re: Is it?

        Well - today it is merely a topic. But we will organise the user forum by section - and this will include a storage forum.

        Hope that is clear!

    2. Sean Baggaley 1

      What's to discuss?

      Surely people store computers on desks, shelves, etc., just like everything else?

      (What? Oh! Right... never mind.)

      1. Anonymous Coward
        Anonymous Coward

        Re: What's to discuss?

        Many years ago, I read an article in a telecoms networking magazine about asynchronous transfer mode. Halfway through the article the author switched to writing about cash machines (ATMs).

I also had a junior beat reporter pitch me a "storage" story. Yes, it was about warehouses, and yes, that reporter is now working in PR.

        1. BristolBachelor Gold badge
          Thumb Up

          After having a... um... look around the network at school, I was made to write an essay on the subject of security and why it is needed. Obviously I didn't want to get kicked out of my GCEs, so I did it, but not wanting to disappoint, I did a nice piece about mortgages and how the property acts as security for the loan in case of default :)

  2. This post has been deleted by its author

  3. Trevor_Pott Gold badge

    Dual cans of RAID.

Several years ago, ZDNet ran an excellent piece: "Why RAID 5 stops working in 2009".

    I posit that the advent of the 4TB hard drive has in fact done for RAID 6. I am unsure that ZFS is ready to pick up the slack, and Microsoft's promised new storage tech in Windows 8 doesn't even do dual disk redundancy!

    Have we really no alternative to RAID 6 excepting RAID 61? Are some of the proprietary methods the solution to our woes?

Your thoughts and insights are appreciated, fellow commentards.
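The worry above can be put in numbers. A minimal sketch, assuming the commonly quoted consumer-drive URE spec of 1 error per 10^14 bits read (real drives vary, and the function name is mine, not from any article):

```python
# Chance of hitting at least one unrecoverable read error (URE)
# during an array rebuild, assuming a 1-in-1e14-bits URE rate.
URE_RATE = 1e-14          # errors per bit read (assumed spec value)
BITS_PER_TB = 8e12        # 1 decimal TB = 8e12 bits

def p_ure_during_rebuild(tb_to_read: float) -> float:
    """Probability of >= 1 URE when reading tb_to_read terabytes."""
    bits = tb_to_read * BITS_PER_TB
    return 1 - (1 - URE_RATE) ** bits

# Rebuilding a 6 x 4TB RAID 5 means reading the 5 surviving
# drives in full: 20 TB. Roughly a 4-in-5 chance of a URE.
print(round(p_ure_during_rebuild(20), 2))
```

This is exactly why big drives "do for" single-parity RAID: the rebuild read is so large that a URE becomes the expected case, and RAID 6's second parity is what absorbs it.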

    1. Ammaross Danan
      Coat

      Sad News

The sad news is, RAID5 (and now, by default, RAID6) is the cheapest way to maximize storage space with as few disks as possible (10x2TB disks in RAID6 give more usable space than 10x2TB in RAID10, for instance). Unfortunately, RAID10 is in a similar, slightly-better-but-still-bad position to RAID6. To rebuild a failed RAID10 disk, the system only needs to read a single disk and copy it verbatim. However, hit an unrecoverable read error on that single disk (less likely than in a RAID6 rebuild, by the way, do the math!) and you end up in the same precarious boat as RAID6, with less disk space but better performance.
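The space claim above is easy to sanity-check. A quick sketch, assuming identical disks and the textbook capacity formulas (the helper is mine, not from the post):

```python
# Usable capacity for n identical disks under RAID6 vs RAID10.
def usable_tb(n_disks: int, tb_each: float, level: str) -> float:
    if level == "raid6":    # two disks' worth of parity overhead
        return (n_disks - 2) * tb_each
    if level == "raid10":   # mirrored pairs: half the raw capacity
        return n_disks / 2 * tb_each
    raise ValueError(level)

print(usable_tb(10, 2, "raid6"))   # 16.0 TB usable
print(usable_tb(10, 2, "raid10"))  # 10.0 TB usable
```

So for ten disks the RAID6 array nets six disks' more capacity than RAID10, which is exactly the cost argument that keeps parity RAID alive despite the rebuild risk.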

      Now, this segues into the next point: SSDs bottleneck RAID cards in any form of checksum (RAID5-ish) setup. Given how SSD firmware is designed, a URE is far less likely than a whole-disk failure, which arguably makes SSDs better suited to a striping environment. (In a mirroring setup, each drive would theoretically see the exact same writes and wear, assuming the controller doesn't use an RNG for allocation or cleanup, suggesting near-identical failure times; in practice this isn't seen as often because endurance varies from cell to cell.) A side note: SSDs tend to fail into read-only mode, so a "custom" RAID solution could use the failed drive to rebuild most or all of the replacement disk... but I digress.

      Spinning disk will still lead when it comes to sheer data storage, hence the need to address resiliency in RAID. An interesting thought/solution would be RAID 0+1+1, which would scale slightly worse (available-space-wise) than RAID61 and have one disk less worst-case resiliency, but much better rebuild performance and rebuild resiliency. The first RAID0 would work at full speed while the second is used for mirrored reads (think read striping across the mirrored disks as well as across the stripe), and the one disk that is the second mirror of the failed disk can be dedicated solely to restoring the replacement. (Don't bring in the primary mirror: that would cause head-thrashing on the target rebuild disk, since 2:1 read:write would only work with appropriately striped reads.) A single URE would be protected against by the remaining mirror, or, if you're unlucky, by the single remaining disk. Any checksumming would unfortunately hobble an SSD implementation of such an array, but for disks it could be acceptable.

      The caveat for RAID 0+1+1 is that its worst-case resiliency is only on par with a triple-parity stripe (RAID7, I believe). It just has the benefits of low processing overhead and near-RAID0 performance even in "degraded" mode, which could matter more than the lower resiliency in some/most cases.

      Any other thoughts or flames?

      1. This post has been deleted by its author

  4. BristolBachelor Gold badge

    I think it also depends on what the RAID is for. I am generally happy with single-disk performance for our small home business, so I use RAID5 for resiliency and to hide the question of which disk a given file is on.

    For me, non-striped storage with 1 or 2 global parity drives would actually do. The parity drives could always be SSDs, solving the write bottleneck that drove the change from RAID4 to RAID5. In a catastrophic failure, the non-failed disks still hold useful data if the catalogue is duplicated. Disks can also be spun down and left switched off until the data on them is needed, a bit like nearline and offline storage.
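The single-global-parity idea above is essentially RAID4-style XOR parity over whole disks. A tiny illustrative sketch (the function and sample data are mine, not any product's on-disk format):

```python
# Byte-wise XOR parity across data disks, and rebuild of a lost disk.
def xor_blocks(blocks):
    """Byte-wise XOR of equal-length byte strings."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

disks = [b"AAAA", b"BBBB", b"CCCC"]   # contents of three data disks
parity = xor_blocks(disks)            # what the parity drive stores

# Lose disk 1: XOR the survivors with the parity to get it back.
rebuilt = xor_blocks([disks[0], disks[2], parity])
assert rebuilt == disks[1]
```

Because the data disks are not striped, losing more disks than you have parity still leaves the surviving disks readable on their own, which is the "useful data after a catastrophic failure" property mentioned above.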

    There is a product a bit like this built on Linux (Limewire?) but it didn't really look right for me.

    As for SSDs, I think if you look at failures, most (almost all?) of those seen so far have been total-loss failures, not the failure of a few cells. Yes, the wear-out mechanism is the failure of a few cells, but the actual failures being seen are not.
