Perish the fault! Can your storage array take a bullet AND LIVE?

Storage doesn't have to be hard. It isn't really all that hard. If you ask yourself "can my storage setup lead to data loss?" then you have already begun your journey. As a primer, I will attempt to demystify the major technologies in use today to solve that very problem. Certain types of storage technologies (rsync, DFSR) are …

COMMENTS

This topic is closed for new posts.
  1. Destroy All Monsters Silver badge

    RAID 5 shouldn't even be named unless living under the bridge

    BAARF! Enough is enough. You can either join BAARF. Or not.

    Anyway, this is a nice writeup. Gotta study, I still have stuff from 5 or even 10 years ago in my head. It only seems like yesterday....

    Also:

    "SAS drives without an identical SATA model"

    Is this meant to say that the hardware vendor is supposed to have created a separate SAS production series very much unrelated to the SATA production series? Is that even possible these days?

    1. Trevor_Pott Gold badge

      Re: RAID 5 shouldn't even be named unless living under the bridge

      Have a related SATA series all you want, but your SAS drives had damned well better be of superior quality to the SATA drives. If the SATA version of your SAS line is something you are only willing to cover with a 1 year guarantee then I do not have warm fuzzies about the non-marketing-bull MTBF on your SAS line...

      1. Destroy All Monsters Silver badge
        Holmes

        Re: RAID 5 shouldn't even be named unless living under the bridge

        I always assume that the only thing that changes are the 15 cm² of electronics, with the case, motor and disk exactly the same. And maybe not even that. Possibly just the chip that changes. And MAYBE not even that. Maybe just ROM that changes.

        Seriously, are there still separate lines for SAS and SATA these days? I demand proof! I also demand statistical proof that this improves reliability.

    2. JEDIDIAH
      Devil

      Re: RAID 5 shouldn't even be named unless living under the bridge

      RAID5 is only a problem if you are trying to artificially constrain yourself with a system that is itself a single point of failure. At least when a rube thinks like that (all eggs in one basket) they have an excuse as they don't know any better.

      ...and yes, my storage setup can take a bullet and live.

      1. Destroy All Monsters Silver badge
        Facepalm

        Re: RAID 5 shouldn't even be named unless living under the bridge

        > Redundant Array of Disks, Level 5

        > Only a problem if you are not redundant anyway.

        Dude, what?

        1. JEDIDIAH
          Linux

          Re: RAID 5 shouldn't even be named unless living under the bridge

          It helps to make sense of a post if you don't completely mangle it first.

  2. Anonymous Coward
    Anonymous Coward

    Bullet-proof?

    For truly bullet-proof computing I miss the old VMS Clusters. Each processor having independent and simultaneous access to the disks, plus volume shadowing, meant that a properly-configured system had no single point of hardware failure. Famously put to the test in 1996 when the Paris HQ of the Credit Lyonnais Bank burned to the ground; the other half of the cluster 12km away in the suburbs carried on working and literally saved the bank.

    1. Destroy All Monsters Silver badge
      Big Brother

      Re: Bullet-proof?

      > literally saved the bank

      That happened just at the moment the Credit Lyonnais was embroiled in a bizarre case of about a trillion francs français having disappeared / gone to people of the well-connected persuasion with the traces kept on paper only at HQ? But maybe I misremember. Could be.

      Such a tragic accident.

    2. Anonymous Coward
      Anonymous Coward

      Re: Bullet-proof?

      > For truly bullet-proof computing I miss the old VMS Clusters.

      Agreed, they were great systems, properly integrated into the OS as well, not kludged on top like some of the Linux "HA" solutions.

      Closest thing to a VMS cluster today would be Solaris Cluster, much the same type of architecture. Not surprising given the history of some of the guys who work on it.

      1. Matt Bryant Silver badge
        Stop

        Re: AC Re: Bullet-proof?

        ".....Solaris Cluster....." Seriously? You obviously don't know that Veritas reps used to say that Solaris Cluster was the best sales tool they had for selling Veritas Cluster into Solaris environments. Actually less reliable than Microsoft Cluster, RHEL Cluster Suite, in fact just about any clustering tech I have ever met! And the VMS cluster was true active-active, which Sun Cluster is not. Only someone that knew SFA about VMS Clustering would consider Sun Cluster in any way comparable.

        1. Anonymous Coward
          Anonymous Coward

          Re: AC Bullet-proof?

          Matt, I know you love to post anti-Sun propaganda, but this time it is you that knows SFA, as every line of the nonsense above shows. I also know the actual numbers for availability between VCS and SC (the real ones, not the VCS marketing fluff). There are certainly areas where SC hasn't caught up with VMS clustering, but active-active isn't one of them. Of course you can run SC active-active. It also has much better virtualization support than VCS. Agreed, Microsoft clustering is comparable, it gets similar reviews, but RHEL HA? Don't make me laugh.

          I've worked with VMS clusters for 10+ years, and Solaris ones for 15+. Please, stick to posting on subjects where you have at least half a clue.

          1. Matt Bryant Silver badge
            Happy

            Re: AC Re: AC Bullet-proof?

            "Matt, I know you love to post anti-Sun propaganda....." Why would I bother, Sun is dead, or didn't you get the memo?

            ".....but this time it is you that knows SFA....." LOL! So if I disagree with your Sunshine I must be lying? As I asked above, why would I? There is no need, Sun is dead and Slowaris is nothing more than a relic. The reason I post verifiable facts is I get very sick of you Sunshiners posting easily debunked guff. Sun is dead, get over it.

            ".....Of course you can run SC active-active....." Not like VMS clustering. In Slowaris Cluster the best you could do is run different apps on different cluster nodes and fail them back and forth, just two or more instances of failover clustering in one cluster; you cannot share an instance actively across two nodes at once. VMS Clustering can use shared memory and is the basis for Oracle RAC - two instances, one application (in RAC's case, one database). Trying to pretend Slowaris Cluster could match that is laughable; Slowaris needs Oracle RAC (ie, VMS Cluster tech) to provide the same capability.

            "......I've worked with VMS clusters for 10+ years, and Solaris ones for 15+....." Really? But you seem to know SFA about VMS shared memory clustering? Do you work for a chap called Yen Sid, perchance?

            /SP&L

            1. Phil O'Sophical Silver badge
              Thumb Down

              Re: AC AC Bullet-proof?

              ".....Of course you can run SC active-active....." Not like VMS clustering. In Slowaris Cluster the best you could do is run different apps on different cluster nodes and fail them back an forth, just two or more instances of failover clustering in one cluster, you cannot share an instance actively across two nodes at once.

              Sorry Matt, but you're wrong on this one. Solaris Cluster has supported scalable apps spread across multiple nodes for decades. It started with SPARCcluster and Oracle Parallel Server back in the early 90's, through to Oracle RAC, Apache, web loadsharing and a bunch of other services today. Go read the docs, they're online.

              (incidentally, when running RAC on top of Solaris Cluster, RAC delegates cluster membership management to the Solaris software, a far cry from your suggestion that Solaris somehow needs RAC to get the functionality).

              Seriously, when you post this sort of easily-verified BS is it any wonder that people don't take you seriously?

              1. Matt Bryant Silver badge
                Stop

                Re: Phil Re: AC AC Bullet-proof?

                ".....Sorry Matt, but you're wrong on this one. Solaris Cluster has supported scalable apps spread across multiple nodes for decades. It started with SPARCcluster and Oracle Parallel Server back in the early 90's, through to Oracle RAC, Apache, web loadsharing and a bunch of other services today. Go read the docs, they're online.....". No it did not. In all those cases it is the application on top of the cluster providing the application sharing, and all SPARC-Slowaris Cluster is doing is providing hardware failover beneath, period. Oracle Parallel Server was the predecessor to RAC; it did the sharing bit, not Slowaris Cluster. Oracle went with the VMS clustering tech in RAC as it was better than OPS. OPS could be run on top of any number of failover clustering technologies, just like RAC. I would suggest it is you that needs to do the reading.

        2. Anonymous Coward
          Anonymous Coward

          Re: AC Bullet-proof?

          Funny you mention Veritas, since your beloved HP couldn't figure out how to port the much better TruCluster and so went crawling to Veritas and begged them for their crappy implementation.

          1. Matt Bryant Silver badge
            WTF?

            Re: AC Re: AC Bullet-proof?

            "Funny you mention Veritas, since your beloved HP couldn't figure out how to port the much better TruCluster...." The reason hp didn't bother porting Tru64 and associated clustering to Itanium or adding the features into hp-ux was because us customers said we weren't bothered with it, we were happier with hp-ux and Serviceguard clustering, the latter having a much bigger market share than Trucluster. Porting from Tru64 UNIX to hp-ux was a pretty simple job, much simpler than migrating VMS users to hp-ux, and VMS survived because it had a big market share and VMS Clustering had unique features.

            "........so went crawling to Veritas and begged them for their crappy implementation." Never heard of Serviceguard, which existed long before the Compaq purchase? I think you're confusing hp's use of VxFS with VCS. Do we need to welcome you as someone new to the industry, or do you just know nothing about hp-ux?

      2. Alan Brown Silver badge

        Re: Bullet-proof?

        "Closest thing to a VMS cluster today would be..."

        ... A VMS cluster.

        You can still get them. (although our VMS setup is standalone)

  3. Sandtitz Silver badge
    Go

    Literally bulletproof storage

    "Can your storage array take a bullet AND LIVE?"

    Sure, ask HP. http://www.youtube.com/watch?v=Gnjb1WVkhmU

    1. M. B.

      Re: Literally bulletproof storage

      I guess Hitachi can too then by default!

      Also, the article mentioned the resiliency of the VMware VSA but didn't mention HP's StoreVirtual VSA (running LeftHand OS 10) which I would argue is the best of the bunch. We've been testing it here on some old servers for lab purposes and it works really quite well if you have a couple NICs to spare.

      1. Trevor_Pott Gold badge

        Re: Literally bulletproof storage

        I didn't mention StoreVirtual because I have never had the opportunity to play with it or even see a demo. It's on my list.

  4. Grikath

    in short..

    You require a belt, suspenders, and preferably a nail-gun to be absolutely "safe" .

    1. Anonymous Coward
      Anonymous Coward

      Re: in short..

      I think you mean "braces", not "suspenders".

      Suggesting otherwise on a UK website raises some disturbing questions...

      1. Blane Bramble

        Re: in short..

        ... or will attract politicians...

      2. Grikath
        Joke

        Re: in short..

        I abase myself for blatant and abrasive use of an americanism, Oh mighty AC.

        My english teacher shall be brought from retirement and flogged unto death for teaching me this phrase all those decades ago, and the school shall be burned down, and the grounds sown with salt to appease the mighty hordes of affronted grammar nazis.

        Meanwhile I shall refrain from posting in the majestic language of the British Isles, and post in my native language, which will doubtlessly be flawlessly translated into proper Queens' English by such worthies as Google and Bing.

        1. Trevor_Pott Gold badge

          Re: in short..

          Use the Queen's proper English, strong and free. Canadian, eh?

        2. Ken Hagan Gold badge
          Coat

          Re: in short..

          "flawlessly translated into proper Queens' English"

          Would this be the wrong moment to point out that the apostrophe is in the wrong place?

      3. Anonymous Coward
        Anonymous Coward

        Re: in short..

        Braces, no way, I find by wearing suspenders the support guys get around to me so much more quickly.

  5. Anonymous Coward
    Anonymous Coward

    We run 2 x identical Windows 2008 R2 Hyper-V storage servers with 600GB Intel SSDs in them in our organization, basically mirroring each other each night.

    The Intel SSDs are rock solid, there are no complicated setups and the file sync each night is lightning fast. We also back up the entire VM each weekend.

    Fail safe data on a very limited budget :)
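    The nightly sync in a setup like this boils down to a one-way mirror. A toy sketch of the idea (made-up paths; a real deployment would use robocopy or rsync rather than Python):

    ```python
    import os
    import shutil
    import tempfile

    def nightly_mirror(src: str, dst: str) -> None:
        # One-way mirror: make dst an exact copy of src.
        # Anything written to the standby since the last run is discarded.
        if os.path.exists(dst):
            shutil.rmtree(dst)
        shutil.copytree(src, dst)

    # Demo with throwaway directories standing in for the two servers.
    primary = tempfile.mkdtemp()
    standby = os.path.join(tempfile.mkdtemp(), "mirror")
    with open(os.path.join(primary, "vm.vhd"), "w") as f:
        f.write("disk image")

    nightly_mirror(primary, standby)
    print(sorted(os.listdir(standby)))  # ['vm.vhd']
    ```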

    1. Anonymous Coward
      Anonymous Coward

      Intel rock solid?

      Just had a 6-month old 180GB 330 die on me, and it wasn't even one of the most heavily used ones.

    2. pPPPP

      So you rsync your data once a day. What happens when your production SSD fails a few seconds before that backup starts? Your RPO is 24 hours. Most organisations wouldn't put up with that.

      1. Trevor_Pott Gold badge

        I think different tiers of data can sustain different RPOs. With something like Storage Profiles in VMware that can be made easy. I do not, for example, care overmuch if my webservers get reverted to yesterday; they grab their info from a centralized storage location which is disaggregated from the individual VMDK of the PaaS VM itself.
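        To put rough numbers on that, a toy sketch (the tier names and RPO targets are invented for illustration) of checking a sync interval against per-tier RPO:

        ```python
        # Worst-case data loss for periodic sync equals the sync interval:
        # the disk dies one second before the next sync runs.
        RPO_TARGETS_HOURS = {
            "databases": 0.25,    # hypothetical: needs near-continuous replication
            "file-shares": 4.0,
            "web-servers": 24.0,  # rebuilt from central storage, nightly is fine
        }

        def interval_ok(tier: str, sync_interval_hours: float) -> bool:
            return sync_interval_hours <= RPO_TARGETS_HOURS[tier]

        print(interval_ok("web-servers", 24))  # True: nightly sync is acceptable
        print(interval_ok("databases", 24))    # False: that tier needs better
        ```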

        You just gave me a great idea for an article. Much appreciated.

  6. Phil W
    Joke

    Response to the headline..

    I'm going to pretend not to have read the actual article and simply respond with the following to the headline.....

    Surely it depends on the caliber of the bullet, and where the bullet hits?

    If you have for instance a 32 disk array, and you fire a .50 cal round diagonally through from one corner to another you're likely to take out between 1/4 and 1/3 of the disks in physical damage.

    Not to mention hitting the power supply which may cause voltage spikes destroying more disks.

    On the other hand a .22 cal fired straight through the front will take out one or two disks and probably no power so you'll likely be ok.

    Plus many variations on the above.

    Do tank shells count as bullets?

  7. Lord Elpuss Silver badge
    Happy

    Great article

    I'm not a sysadmin, or involved in any way with designing storage (apart from selling it back in my salesman days) - and from a technology standpoint this stuff is way over my head.

    Remarkable then that I find this fascinating; Trevor, you have that rare ability to combine technical expertise with an informative writing style. So much so that I now read every article you write, regardless of the topic, on the basis that it will almost certainly be very interesting.

    1. Trevor_Pott Gold badge
      Pint

      Re: Great article

      \o/

    2. BlueGreen

      Re: Great article

      agreed, and lots of supportive links. I wish I had time to do them justice.

      I'd like to quibble over: "never, ever, under any circumstances, lie to ZFS."

      I presume this refers to synchronous flushes. Even having local disks doesn't guarantee sync flushes being honoured, even if they advertise as such. Home disks may well not, even server disks were known to have problems, and RAID cards (I think Dell was known for it) claimed to do so but didn't. I think things are a lot better now than before, but I guess you need to define "lie" rather precisely. MS's HCL might be interesting, and checking out logging on MSSQL for a description of why sync writes matter.
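      The application-side half of that contract is just write-then-flush; whether the flush is honoured further down the stack (filesystem, controller cache, drive cache) is exactly the open question. A minimal sketch:

      ```python
      import os
      import tempfile

      def durable_write(path: str, data: bytes) -> None:
          # fsync() asks the OS to push the data to stable media. It only
          # delivers durability if every layer below honours the flush; a
          # "lying" drive or RAID card acknowledges it before the data lands.
          fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o600)
          try:
              os.write(fd, data)
              os.fsync(fd)
          finally:
              os.close(fd)

      path = os.path.join(tempfile.mkdtemp(), "journal.log")
      durable_write(path, b"commit record")
      print(open(path, "rb").read())  # b'commit record'
      ```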

      Anyway, good article.

      1. Trevor_Pott Gold badge

        Re: Great article

        More than just flushes; seriously, click the link on that. (Or rather, it is about flushes, but it really gets into how ZFS does them and what mechanisms it can use if it "owns" the disk. Also how to configure ZFS so that the damned thing works. It's a truly great link.)

        Also: I cannot claim complete credit on things like links. I have a great research team to back me up. It helps to have additional eyes to check things over.

  8. Anonymous Coward
    Anonymous Coward

    Hear hear!

    Raid is dead, long live ZFS. Thanks for bringing this topic to the fore Trevor.

    Now if only all those raid storage vendors stopped sitting on their hands and left the obsolete past behind, we'd greatly improve data integrity.

    Plus ZFS is not even difficult to use, it is easy even. And available on linux. Why it is not widely used is beyond me.

    1. Anonymous Coward
      Anonymous Coward

      Re: Hear hear!

      > Plus ZFS is not even difficult to use, it is easy even. And available on linux. Why it is not widely used is beyond me.

      Because Oracle have no intention of releasing it under GPL, the plan is to add all its features into Btrfs.

      1. Alan Brown Silver badge

        Re: Hear hear!

        That plan is a _long_ way behind the implementation of ZFS as a native linux filesystem - and it still only covers a fraction of what ZFS does.

    2. Matt Bryant Silver badge
      FAIL

      Re: Hear hear!

      OMFG, Kebbabfart has learnt English!

      "Raid is dead, long live ZFS......" I can cluster RAIDed disks, I cannot cluster ZFS. RAID is far from dead, but from a serious HA viewpoint ZFS is stillborn as it cannot cluster.

      "...... Why it is not widely used is beyond me." Obviously a lot is beyond you. Let's start with the simple fact that RAID can be implemented in hardware, so it's less of a CPU-cycle stealer than ZFS. Then let's look at how ZFS is software and just as buggy as any other Sun software, whereas RAID is tried and tested and trusted. After that we have the fact that RAID works across so many more OSs than ZFS. Then we have the problem ZFS inherited from WAFL - as the file system fills up, ZFS slows down and chews more CPU. ZFS for a Slowaris desktop might be a good idea; anywhere else it fails miserably. I'm really getting tired of the Sunshiners that keep pushing it as some amazing panacea for all computing ills.

      1. Alan Brown Silver badge

        Re: Hear hear!

        "I can cluster RAIDed disks, I cannot cluster ZFS"

        SAM-QFS. Glusterfs.

        *Ahem*

        If you're sucking on the GFS koolaid be warned it's the _least_ reliable part of our entire HA setup (We don't even dare breath hard on it. Standalone machines would have higher aggregate uptimes)

        1. Matt Bryant Silver badge
          WTF?

          Re: Alan Brown Re: Hear hear!

          Thumbs up for mentioning GlusterFS, not so sure why you brought SAM-QFS (hierarchical archiving) into the conversation? Distributed file systems seem to be the way to go.

          "......If you're sucking on the GFS koolaid....." Que? Been using RHEL with GFS and Oracle RAC to replace old UNIX systems since about 2006 and it has been quite stable, I think some of it we didn't even bother upgrading to GFS2 as it has been so reliable. Are you talking about with different apps, maybe MySQL?

          1. Anonymous Coward
            Anonymous Coward

            Re: Alan Brown Hear hear!

            > not so sure why you brought SAM-QFS (hierarchial archiving) into the conversation? Distributed file systems seem to be the way to go.

            You mean like the Shared-QFS distributed file system? Matt, do you ever drink anything but HP KoolAid?

            <shakes head sadly>

            1. Matt Bryant Silver badge
              FAIL

              Re: AC Re: Alan Brown Hear hear!

              ".....You mean like the Shared-QFS distributed file system?....." As I understand it, production use of QFS is about as common as the proverbial hen's teeth, seeing as it is pretty much crap compared to a dozen better open source alternatives, and the only use you're likely to (rarely) come across is SAM-QFS in archive solutions. Oh, and IIRC, it isn't even original Sun work, being a product they borged and killed with their usual ineptitude. Please do supply a list of Fortune 1000 companies using it for their production billing, trading or CRM servers' file systems.

              ".....do you ever drink anything but HP KoolAid?....." It is very obvious your KoolAid is not only very stale but also only drunk in ivory towers far from the realities of enterprise computing. Thanks for sharing your "unique" insight, it provided much humour, but I think I'd happily take even the original GFS or Gluster over QFS any day of the week! Oh, and BTW - neither Gluster nor GFS are from HP - duh!

      2. Anonymous Coward
        Anonymous Coward

        @ Bryant

        English is not my first language.

        When you're able to post on a foreign-language technical website, get back to me. I doubt you can, because those who do are usually more open-minded and considerate than you appear to be.

        In the meantime, please abstain from personal attacks; this can only make your points more valid, and it will also make you grow.

        Regarding your points, CPU is cheap, data integrity is primordial. RAID is much more mature than ZFS of course, but where your data matters to the last bit, RAID is of no help, regardless of how many clusters you may have.

        Lastly, I am no "sunshiner" (never worked for sun or managed sun gear) nor a fanboi or any other label you may want to stick to people in order to demean them.

        Also, writing slowlaris does demean you.

        1. Matt Bryant Silver badge
          Happy

          Re: @ Bryant

          Aw, wasn't that just too precious?

          "English is not my first language....." I applaud your educational achievement. The joke was more aimed at Kebbie who has a habit of lapsing into some quite comical, pidgin English when he gets excited about Sun.

          "......When you'll be able to post on a foreign language technical website....." Apart from the fact that you have no idea of my linguistic capabilities, the simple fact is I wouldn't need to anyway, as English is the international language of IT, as you prove by having learned it and coming here to an English techie website. Hats off to our Yankee, ex-colonial brethren for making it so.

          "......In the meantime, pls abstain from personal attacks, this can only make your points more valid, and it will also make you grow....." Grow very boring you mean? I notice that you cannot disprove the technical points I made in between the "personal attacks" - were you too upset to counter them? Maybe you need to grow a thicker skin, or maybe it was just easier for you to rail about "personal attacks" rather than deal with technical facts?

          ".....CPU is cheap, data integrity is primordial....." Hmmmm, it seems you think primordial means something other than from the earliest era of creation. Did you mean of primary importance? If so, then how much more important could it be that you do not lose access to your data through a single point of failure like ZFS?

          1. Anonymous Coward
            Anonymous Coward

            Re: @ Bryant

            Matt, you seem to suffer from the same personality deficiencies as Eadon.

            Unwarranted personal attacks as soon as someone disagrees with your statements; even more aggressively personal when such people don't immediately back down, instead of simply bringing facts and evidence to a discussion which is otherwise based on nothing but your assertions; no ability to debate an issue purely on merit (and maybe learn something); and an opinion of yourself apparently so high that your ego must surely have its own social security number by now.

            Try first to ensure the correct context. Then discuss the *facts*. Leave off any assertions about the person's knowledge, skills or even affiliation because you truly have no clue who you are dealing with and what their expertise is. Last but not least, drop the language arrogance. A language is not just vocabulary and grammar, it's a way of thinking. More languages means access to other thinking structures, which makes you smarter and intellectually more flexible. It may also help you mature and discover what value basic courtesy, politeness and diplomacy can bring to your life. Learn to control that ego, soon.

            1. Matt Bryant Silver badge
              FAIL

              Re: AC Re: @ Bryant

              "Matt, you seem to suffer from the same personality deficiencies as Eadon....." Ooh, look - a personal attack because they can't counter the technical facts. Pot, meet kettle.

              ".....Unwarranted personal attacks as soon as someone disagrees with your statements...." Not only did you open with a personal attack, you have not even made the effort to attempt to disagree.

              ".....instead of simply bringing facts and evidence to the discussion....." In your totally fact-free post, you mean? All that blather and attempts at moral superiority and nothing that could even pass as a shadow of a technical or industry related fact in your whole post. Like I said, if you think your input is so valuable, please do discuss some facts and post details of the Fortune 1000 companies using QFS. Shall I cue the tumbleweed whilst we wait for your response?

              1. Anonymous Coward
                Anonymous Coward

                Re: AC @ Bryant

                QED..

                1. Matt Bryant Silver badge
                  Happy

                  Re: AC Re: AC @ Bryant

                  "QED.." Running up the white flag so early? Come on, can we at least have some form of pretence at industry knowledge, maybe some whimsical case study on QFS as used by a third-rate university in Outer Mongolia? Better be quick, Oracle doesn't seem to have much interest in pushing QFS as shown by this statement on the SAM-QFS webpage (http://hub.opensolaris.org/bin/view/Project+samqfs/WebHome):

                  ATTENTION: This website and all services within the opensolaris.org domain will be unavailable after March 24, 2013.

                  Yes, Oracle is "decommissioning" the whole opensolaris.org website. It will be mildly interesting to see if they bother moving the SAM-QFS page or just let it slide into obscurity in the process. After all, it's a surefire sign Oracle has no interest in a product when they open source something, as they have done with SAM-QFS. Enjoy!

                  /SP&L

  9. Sir Alien
    Unhappy

    Storage shouldn't be complicated.... but...

    There are quite a few areas of this article where I just nod in agreement. RAID 5 should really never be used anymore, since you are just shooting yourself in the foot, but for some systems that don't require 100% uptime and only deal with a few small disks (300GB or less) it's not the worst way you can make an array. But since many arrays these days are large, RAID 5 (or even 6) is now a problem.

    Personally I use ZFS (or any checksum based filesystem like btrfs when stable) since to me data integrity matters.

    The problem in the corporate world, though, is a lack of either understanding, trust or both. My place of work (whose name will remain anonymous) is still using RAID 1 on many systems, even busy IO-intensive servers.

    As an example: try explaining to your boss that RAID 1 (or any RAID, to be honest) will not protect you from corrupt data, since if something like software corrupts the data on one disk, the copy on the other is already corrupted and unrecoverable. Now imagine your boss thinks he is always right and says you can recover the data off the second drive if using software RAID. On top of this, the RAID 1 pair is running a server with 20+ virtual machines on it.
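    That failure mode is easy to model in a few lines (a toy illustration, not real RAID code): the mirror faithfully copies whatever bytes it is handed, while a checksum recorded at write time at least notices the damage.

    ```python
    import hashlib

    # A checksumming filesystem records a hash of the block at write time.
    data = b"important record"
    checksum = hashlib.sha256(data).hexdigest()

    # Something above the mirror (buggy software, flaky RAM, a bad cable)
    # flips a bit in flight; RAID 1 dutifully writes the same bad bytes
    # to BOTH disks.
    corrupted = b"imp0rtant record"
    disk_a = disk_b = corrupted

    print(disk_a == disk_b)  # True: the mirror sees two agreeing copies
    print(hashlib.sha256(disk_a).hexdigest() == checksum)  # False: detected
    ```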

    This example is not uncommon in some businesses; you may have the understanding, but you are not the person to convince. As the subject line says, storage shouldn't be complicated, but when the people in charge don't, or refuse to, understand, then it becomes the most complicated system ever.

    This article puts a great perspective on how easy storage should be, but in reality it's not.

    - SirA

    1. Dazed and Confused

      Re: Storage shouldn't be complicated.... but...

      > Personally I use ZFS (or any checksum based filesystem like btrfs when stable) since to me data integrity matters.

      Surely no one should be doing checksums in SW, that's just criminally insane, as bad as people doing SW RAID[2-6]. If ever there was a job crying out for a HW solution then that's got to be one. Anyone would think DMA had never been invented.

      1. Sir Alien

        Re: Storage shouldn't be complicated.... but...

        This is true; if the checksumming can be done in hardware then it takes the stress away from the server using it (among other advantages), but checksumming data in software on the host is at least better than not checksumming at all.

  10. Wanda Lust

    Nail on head

    You hit the nail on the head in your opening paragraph, effectively posing the question: "can your infrastructure survive failure".

    My day job exposes me to many IT shops who simply cannot answer that positively; it's really scary how many. They neglect to ask themselves "what happens when 'x' fails?" and to plan for the inevitable.

    Few take an integrated, systems and services focussed approach. In regard to the rest of the article, I'd have liked to have read more about what makes a storage system, be that block or file. In my book an array isn't a system, it's just a pile of rust externally connected to a server system, whereas a storage system has to be something that adds some considerable value to those little modules of rust: e.g. data integrity on top of the hardware redundancy, local & remote replication, efficiency, performance management, to name a few.

    Keep these articles coming, they make for genuine discussion and an interesting distraction from the vendor fodder that has to make up most of El Reg's output.
