Researchers reveal radical RAID rethink

Singaporean researchers have proposed a new way to protect the integrity of data in distributed storage systems and say their “RapidRAID” system offers top protection while consuming fewer network, computing and storage array resources than other approaches. RAID – redundant arrays of inexpensive disks – has been a storage …

COMMENTS

This topic is closed for new posts.
  1. A Non e-mouse Silver badge

    If RapidRAID is based on nodes, how is it going to help the 90% of people who just want to protect the data on a dozen HDs, and don't want to build their own Googleplex?

  2. Robert Carnegie Silver badge

    RAID != inexpensive

    When RAID was commercialised, there were several good reasons to switch "inexpensive" for "independent". In your world do people still say "inexpensive"?

    There is an attraction in the idea of providing a reliable system by treating hard disk drives as cheap disposable devices that pop like lightbulbs - or like valves in a Bletchley Park computer - but if it ever does fail, you don't want your name and the word "inexpensive" to be on the contract or the order.

    Being connected together, they're not really independent either. But it says they are, and that covers your back.

    1. Trygve Henriksen

      Re: RAID != inexpensive

      Yes, I = Inexpensive.

      That is, compared to the alternative...

      The first RAID I set up was a RAID5 of 5 x 1.3GB SCSI drives.

      (I guess that pretty much dates me...)

      You really don't want to know what a single HDD of that capacity would have cost, if it was even available...

      1. Captain Save-a-ho

        Re: RAID != inexpensive

        Only in the IT world would such claims date you. When you're ready to start talking MFM and RLL controllers, let's talk (or punch cards!).

  3. An0n C0w4rd

    The real reason behind the problems with RAID and larger disk sizes isn't IOPS or anything else, but the uncorrected read error rate. To rebuild a RAID 5 array after a failure you have to read all the data from all the surviving members of the array. The larger the disk, the more likely you are to hit an uncorrected read error, which will silently corrupt the reconstructed data.
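
    As a rough illustration of that point, here is a back-of-envelope sketch (not from the article; the 2TB drives, the 4+1 layout and the 1-in-10^14-bits error rate are assumed, commonly quoted figures) estimating the chance of hitting at least one unrecoverable read error while reading the surviving disks of a RAID 5 set:

        # Back-of-envelope estimate: probability of hitting at least one
        # unrecoverable read error (URE) while reading every surviving disk
        # during a RAID 5 rebuild. All figures are assumptions for illustration.

        def rebuild_ure_probability(disk_tb, surviving_disks, ure_per_bit=1e-14):
            """P(at least one URE) when every bit on the surviving disks is read once."""
            bits_read = disk_tb * 1e12 * 8 * surviving_disks  # TB -> bits, all survivors
            p_clean = (1.0 - ure_per_bit) ** bits_read        # chance every read succeeds
            return 1.0 - p_clean

        # Example: a 4+1 RAID 5 of 2TB drives loses one disk, so 4 survivors are read.
        print(f"{rebuild_ure_probability(2, 4):.0%}")  # roughly 47%

    In other words, with consumer-class error rates the odds of reading every surviving bit cleanly are little better than a coin toss.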

    This is why ZFS does block-based checksums, specifically to catch this kind of problem.

    NetApp gets around it by using non-standard hard drives. NetApp drives have a 520-byte sector, and several (or all, I forget) of those extra bytes are for a checksum.
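
    To make the mechanism concrete, here is a minimal sketch of the per-block checksum idea described above; the 512-byte payload, the 8 spare bytes and CRC32 are assumptions for illustration, not ZFS's or NetApp's actual on-disk format:

        # Illustrative only: store a checksum alongside every data block and
        # verify it on read, so a corrupted block is detected instead of being
        # silently returned (or silently folded into a rebuilt stripe).
        import struct
        import zlib

        BLOCK_SIZE = 512      # data payload per block (assumed)
        CHECKSUM_SIZE = 8     # spare bytes reserved for the checksum (assumed)

        def write_block(data: bytes) -> bytes:
            """Append an 8-byte checksum, giving a 520-byte stored block."""
            assert len(data) == BLOCK_SIZE
            return data + struct.pack(">Q", zlib.crc32(data))

        def read_block(stored: bytes) -> bytes:
            """Verify the checksum before handing the data back; raise on mismatch."""
            data, packed = stored[:BLOCK_SIZE], stored[BLOCK_SIZE:]
            (expected,) = struct.unpack(">Q", packed)
            if zlib.crc32(data) != expected:
                raise IOError("checksum mismatch: block is corrupt")
            return data

        # A flipped bit in the stored block is caught on read rather than propagated.
        good = write_block(bytes(BLOCK_SIZE))
        bad = bytes([good[0] ^ 0x01]) + good[1:]
        assert read_block(good) == bytes(BLOCK_SIZE)
        # read_block(bad) would raise IOError

    A real implementation uses stronger checksums (ZFS offers fletcher4 and SHA-256, for instance), but the detect-on-read principle is the same.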

  4. Anonymous Coward
    Anonymous Coward

    Good points, An0n C0w4rd.

    The thing that makes ZFS very robust is that all data storage and data transfers are checksummed, so it doesn't matter where the data gets corrupted: you know about it, and promptly.

    I wonder if RapidRAID does comprehensive checksumming; if not, I wouldn't trust it.

    I had a disk failure on a five-disk RAIDZ2 array, lost nothing, and didn't notice any performance issues during the 56-hour resilver of the replacement 2TB disk.
