Ten four-bay NAS boxes

The storage needs of home users are ever growing, such that the capacity of dual-layer DVDs appears minuscule and backing up to CD looks desperate. Many are now turning to network-attached storage systems that not only allow data storage in the home, but also provide HTTP, FTP and cloud services for when you're on the go. The …

COMMENTS

This topic is closed for new posts.
                        1. Kebabbert

                          Re: @Matt

                          Matt,

                          No, ZFS can't cluster. This is actually a claim of yours that happens to be correct, for once. Non-clustering is a disadvantage, and if you need clustering, then ZFS can not help you. But you can tack distributed filesystems on top of ZFS, such as Lustre or OpenAFS.

                          .

                          .

                          "...More evasion. I pointed out that not one vendor has dropped expensive Veritas for "free" ZFS and all you do is go off on a tangent. Just admit it and then go stick your head back up McNealy's rectum..."

                          Well, I admit that I don't know anything about your claim. But you are sure on this, I suppose, otherwise you would not be rude. Or maybe you would be rude even without knowing what you claim?

                          But there are other examples of companies and organizations switching to ZFS. For instance, CERN. Another heavy user of large data is IBM. I know that IBM's new supercomputer Sequoia will use Lustre on top of ZFS, instead of ext3, because of ext3's shortcomings:

                          http://www.youtube.com/watch?v=c5ASf53v4lI (2min30sec)

                          http://zfsonlinux.org/docs/LUG11_ZFS_on_Linux_for_Lustre.pdf

                          At 2:50 he says that "fsck" only checks metadata, but never the actual data. But ZFS checks both. And he says that "everything is built around data integrity in ZFS".

                          If you google a bit, there are many reports of companies migrating from Veritas to ZFS. Here is one company that migrated to ZFS without any problems.

                          http://my-online-log.com/tech/archives/361

                          .

                          .

                          "...Oooh, [Nexenta] a tier 3 storage maker! Impressive - not!..."

                          Why is this not impressive? Nexenta competes with NetApp and EMC, offering similar servers that are faster but cheaper. Why do you consider NetApp and EMC "not impressive"?

                          .

                          .

                          "...More evasion. I asked you for one server vendor that has dropped Veritas for ZFS and the answer is NONE..."

                          What is your point? ZFS is proprietary and Oracle owns it. Do you mean that IBM or HP or some other vendor must switch from Veritas to ZFS to make you happy? What are you trying to say? I don't know of any vendor, but I have not checked. Have you?

                          .

                          .

                          "...You failed AGAIN to answer the point and pretend that naming cheapo, tier 3 storage players is an answer. It's not. Usual fail. Maybe before you do your next (pointless) degree you should do a GCSE in basic English..."

                          I agree that my English could be better, but as I tried to explain to you, English is not my first language. BTW, how many languages do you speak, and at which level?

                          Speaking of evading questions, can you answer mine? Have you ever noticed random bit flips in RAM that triggered the ECC error-correcting mechanism? No? So, just because you have never seen it (because you have not checked for it), that means ECC RAM is not necessary? I mean, users of big data, such as Amazon's cloud people, say that there are random bit flips all the time: in RAM, on disks, everywhere. But you have never seen any, I understand. I understand you don't trust me when I say that my old VHS cassettes deteriorate because the data begins to rot after a few years. This also happens to disks, of course.

                          So, I have answered your question on which vendors have adopted Oracle's proprietary tech: I haven't checked. Probably they don't want to get sued by Oracle.

                          Can you answer my question? Do you understand the need for ECC RAM in servers?

                          1. Matt Bryant Silver badge
                            Facepalm

                            Re: Re: @Matt

                            "No, ZFS can't cluster....." FINALLY! One of the Sunshiners has finally admitted a simple problem with ZFS! Quick, call the press! Oh, hold on a sec, it doesn't seem to have stopped him from spewing another couple of terrawads of dribbling.

                            "....Why is this not impressive? Nexenta competes with NetApp and EMC...." If I stick FreeNAS on an old desktop and hawk it on eBay am I "competing with EMC"?

                            ".....What is your point? ZFS is proprietary and Oracle owns it. Do you mean that IBM or HP or some other vendor, must switch from Veritas to ZFS to make you happy?...." Both hp and IBM are a good case in point. Both pay license fees to Symantec to use their proprietary LVM for their filesystems. If ZFS was so goshdarnwonderful as you say, and "free" to boot, surely hp or IBM would be falling over themselves to use ZFS? They aren't. Indeed, corporate users of SPARC-Slowaris still use Veritas for their filesystems rather than ZFS. There is a reason - ZFS is not as good as you think and there are other options, especially on Linux, that are far superior. So for you to come on here and blindly preach on about ZFS as if it is perfection is just going to get you slapped down by those in the know.

                            ".....Do you understand the need for ECC RAM in servers?" Completely irrellevant to the point in hand. It's like saying "oh, you have house insurance, therefore you must have ZFS!" No, I have house insurance because there is a realistic chance that I will need it, unlike ZFS. There is a demonstratable case for ECC RAM. There is not for ZFS, despite what you claim.

                            1. Kebabbert

                              Re: @Matt

                              I don't understand your excitement about me confirming that ZFS does not cluster. Everybody knows it: Sun explained that ZFS does not cluster, Oracle confirms it, and everybody says so, including me. You know that I always try to back up my claims with credible links to research papers / benchmarks / etc, and there are no links that say ZFS does cluster - because it does not. Therefore I can not claim that ZFS does cluster.

                              Are you trying to imply that I can not admit that ZFS is not perfect, that it has flaws? Why? I have never had any problems looking at benchmarks superior to Sun/Oracle's and confirming, for instance, that POWER7 is the fastest CPU today on some benches. I have written it repeatedly: POWER7 is a very good CPU, one of the best. You know that I have said so, several times. I have confirmed superior IBM benchmarks without any problems.

                              Of course ZFS has its flaws; it is not perfect, nor 100% bullet proof. It has its bugs; all complex software has bugs. You can still corrupt data with ZFS in some weird circumstances. But the thing is, ZFS is built for safety and data integrity. Everything else is secondary. ZFS does checksum calculations on everything, and that drags down performance, which means performance is secondary to data integrity. Linux filesystems tend to make the opposite trade. As ext4 creator Ted Ts'o explained, Linux hackers sacrifice safety for performance:

                              http://phoronix.com/forums/showthread.php?36507-Large-HDD-SSD-Linux-2.6.38-File-System-Comparison&p=181904#post181904

                              "In the case of reiserfs, Chris Mason submitted a patch 4 years ago to turn on barriers by default, but Hans Reiser vetoed it. Apparently, to Hans, winning the benchmark demolition derby was more important than his user's data. (It's a sad fact that sometimes the desire to win benchmark competition will cause developers to cheat, sometimes at the expense of their users.)...We tried to get the default changed in ext3, but it was overruled by Andrew Morton, on the grounds that it would represent a big performance loss, and he didn't think the corruption happened all that often (!!!!!) --- despite the fact that Chris Mason had developed a python program that would reliably corrupt an ext3 file system if you ran it and then pulled the power plug "

                              I rely on research and official benchmarks and other credible links when I say something. Scholars and researchers do so. You, OTOH, do not. I have shown you several research papers - and you reject them all. To me, an academic, that is a very strange mindset. How can you reject all the research on the subject? If you do, then you might as well rely on religion and other non-verifiable arbitrary stuff, such as Healing, Homeopathy, etc. That is a truly weird charlatan mindset: "No, I believe that data corruption does not occur in big data, I choose to believe so. And I reject all research on the matter". Come on, are you serious? Do you really reject research and rely on religion instead? I am really curious. O_o

                              So yes, ZFS does not cluster. If you google a bit, you will find old ZFS posts where I explain that one of the drawbacks of ZFS is that it doesn't cluster. It is no secret. I have never seen you admit that Sun/Oracle has some superior tech, or admit that HP tech has flaws. At my last job, people said that HP OpenVMS was superior to Solaris, and some Unix sysadmins said that HP-UX was the most stable Unix, more stable than Solaris. I have no problem citing others when HP/IBM/etc is better than Sun/Oracle. Have you ever admitted that Sun/Oracle did something better than HP? No? Why are you trying to make it look like I can not admit that ZFS has its flaws? Really strange....

                              .

                              .

                              "...If I stick FreeNAS on an old desktop and hawk it on eBay am I "competing with EMC"?..." No, I dont understand this. What are you trying to say? That Nexenta is on par with FreeNAS DIY stuff? In that case, it is understandable that you believe so. But if you study the matter a bit, Nexenta beats EMC and NetApp in many cases, and Nexenta has grown triple digit since its start. It is the fastest growing startup. Ever.

                              http://www.theregister.co.uk/2011/09/20/nexenta_vmworld_2011/

                              http://www.theregister.co.uk/2012/06/05/nexenta_exabytes/

                              http://www.theregister.co.uk/2011/03/04/nexenta_fastest_growing_storage_start/

                              Thus, a FreeNAS PC can not compete with EMC, but Nexenta can. And does. Just read the articles - or will you reject the facts, again?

                              .

                              .

                              "...Both hp and IBM are a good case in point. Both pay license fees to Symantec to use their proprietary LVM for their filesystems. If ZFS was so goshdarnwonderful as you say, and "free" to boot, surely hp or IBM would be falling over themselves to use ZFS? They aren't. ..."

                              Well, DTrace is another Solaris tech that is also good. IBM has not licensed DTrace, nor has HP. What does that prove? That DTrace sucks? No. Thus your conclusion - "if HP and IBM do not license ZFS, it must mean that ZFS is not good" - is wrong, because HP and IBM have not licensed DTrace either.

                              IBM AIX has cloned DTrace and calls it ProbeVue

                              Linux has cloned DTrace and calls it SystemTap

                              FreeBSD has ported DTrace

                              Mac OS X has ported DTrace

                              QNX has ported DTrace

                              VMware has cloned DTrace and calls it vProbes (gives credit to DTrace)

                              NetApp has talked about porting DTrace on several blogs

                              Look at this list. Neither HP nor IBM has licensed DTrace; does that mean DTrace sucks? No. Wrong conclusion again. DTrace is the best tool for instrumenting the system, and everybody wants it. Same with ZFS.

                              .

                              .

                              "...There is a reason - ZFS is not as good as you think and there are other options, especially on Linux, that are far superior..." Fine, care to tell us more about those options that are far superior to ZFS? What would that be? BTRFS, that does not even allow raid-6 yet? Or was it raid-5? Have you read the mail lists on BTRFS? Horrible stories of data corruption all the time. Some Linux hackers even called it "broken by design". Havent you read this link? Want to see? Just ask me, and I will post it.

                              So, care to tell us about the many superior Linux ZFS alternatives? A storage expert explains that Linux does not scale I/O-wise, and you need to use a real Unix: "My advice is that Linux file systems are probably okay in the tens of terabytes, but don't try to do hundreds of terabytes or more."

                              http://www.enterprisestorageforum.com/technology/features/article.php/3745996/Linux-File-Systems-You-Get-What-You-Pay-For.htm

                              http://www.enterprisestorageforum.com/technology/features/article.php/3749926/Linux-File-Systems-Ready-for-the-Future.htm

                              .

                              .

                              "...There is a demonstratable case for ECC RAM. There is not for ZFS, despite what you claim...."

                              Fine, but have you ever noticed ECC firing? Have you ever seen it happen? No? Have you ever seen SILENT corruption? Hint, it is not detectable. Have you seen it?

                              Have you read the experts on big data? I posted several links: from NetApp, Amazon, CERN, researchers, etc. Do you reject all those links that confirm that data corruption is a big problem if you go up in scale? Of course, when you toy with your 12TB hardware RAID setups, you will never notice it, especially as hardware RAID is not designed to catch data corruption. Nor does SMART help. Just read the research papers. Or do you reject Amazon, CERN and NetApp and all the researchers? What is it you know that they don't? Why don't you tell NetApp that their big study of 1.5 million hard disks did not see any data corruption at all? They just imagined the data corruption?

                              http://research.cs.wisc.edu/adsl/Publications/latent-sigmetrics07.pdf

                              "A real life study of 1.5 million HDDs in the NetApp database found that on average 1 in 90 SATA drives will have silent corruption which is not caught by hardware RAID verification process; for a RAID-5 system that works out to one undetected error for every 67 TB of data read"

                              Are you serious when you reject all this evidence from NetApp, CERN and Amazon, or are you just Trolling?

                              1. Matt Bryant Silver badge
                                FAIL

                                Re: @Matt

                                "I dont understand your excitement of me confirming that ZFS does not cluster?...." Oh, I see - you're not going to deny the problem, just deny it is a problem. If it cannot cluster it cannot be truly redundant, whereas free options for Linux can. Anyone buying or building a home NAS thinking they are getting 100% reliability and data safety/redundancy should think again. Trying to pass off ZFS as the answer to all issues is not going to help these people when their NAS dies and they think "But Kebabfart said ZFS would solve all my problems?"

                                "....You, OTOH, do not...." What, now you're saying ZFS does cluster? That's the difference - I stated a fact you could not deny, whereas you just presented opinion pieces long on stats and blather but a little short on undisputed facts.

                                "....IBM has not licensed DTrace, nor has HP. What does that prove?...." That they don't need Dtrace, just like they don't need ZFS, because they have better options.

                                "....Have you ever seen SILENT corruption? Hint, it is not detectable. Have you seen it?...." Have you ever seen a GHOST? Hint, they are not detectable. Have you seen one? Hey, look - I can make a completely stupid non-argument just like Kebbie's!

                                "....Do you reject all those links that confirm that data corruption is a big problem if you go up in scale?...." NAS box, four disks. Even in my paranoid RAIDed cluster, only eight disks. Scale?

                                "....Are you serious..." Well it is hard to take anything you post with any measure of seriousness. FAIL!

        1. jonathan rowe

          Re: I don't get NAS boxes...

          OK matt, you have 2 disks in RAID 1. One disk says a bit is 0, the other says it is 1 (perhaps flipped by a cosmic ray, power surge, flipped memory bit, or an intermittent disk surface error). Which one is correct? You don't know. That's where ZFS comes in. Read up on ZFS and enlighten yourself.

          1. Matt Bryant Silver badge
            FAIL

            Re: I don't get NAS boxes...

            "OK matt, you have 2 disks in RAID 1....." Well, actually I have two sets of four disks with hardware RAID5 from proper Adaptec cards, and then software mirroring between the two chains of disks, which I couldn't do with ZFS. So far it's been up except for mirror splits for backups and fscks for three years, no bit rot. In fact I have never seen a case of the mythical bit rot you Sunshiners insist is always just waiting to happen, either professionally or at home.

            ".....(perhaps flipped by a cosmic ray, power surge, flipped memory bit, or an intermittent disk surface error)....." What, no hobbyhorse sh*t on the drive surface, surely just as likely?

            "....You don't know....." Oh but I do know male bovine manure when I hear it, and you're so full of it it's coming out your ears!

            ".......Read up on ZFS and enlighten yourself." Instead, why don't you tell me when ZFS is going to get the features like online shrink needed to match better file systems like OCFS2? I suggest it is you that needs to do a shedload more reading about the alternatives instead of just parroting the Sunshine.

            /SP&L

    1. annodomini2

      Re: I don't get NAS boxes...

      Your i3 running full Windows with all those drives will draw hundreds of watts; these devices typically draw 25-50W.

      So let's say yours runs at 300W, 24/7 all year. That's 2,628kWh.

      At 50W, 24/7 all year, that's 438kWh.

      Or six times the amount of electricity.

      @15p/kWh

      The NAS costs £65.70/year to run vs £394.20 to run the i3.
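      The arithmetic above can be checked in a few lines; the 300W and 50W draws and the 15p/kWh tariff are the commenter's assumptions, not measured figures:

```python
# Annual energy use and cost for an always-on box: kWh = watts * hours / 1000.
HOURS_PER_YEAR = 24 * 365  # 8760

def annual_kwh(watts: float) -> float:
    return watts * HOURS_PER_YEAR / 1000

def annual_cost_gbp(watts: float, pence_per_kwh: float = 15.0) -> float:
    return annual_kwh(watts) * pence_per_kwh / 100

print(annual_kwh(300))        # 2628.0 kWh for the 300W i3 rig
print(annual_kwh(50))         # 438.0 kWh for a 50W NAS
print(annual_cost_gbp(300))   # 394.2 GBP/year
print(annual_cost_gbp(50))    # 65.7 GBP/year
```

      Note the six-to-one ratio holds whatever the tariff; only the absolute costs depend on the pence-per-kWh figure.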

      1. Michael Habel

        Re: I don't get NAS boxes...

        That's why Op fails...

      2. Back to school
        FAIL

        Re: I don't get NAS boxes...

        "Your i3 running full windows with all those drives will draw hundreds of watts, these devices are typically draw 25-50.

        So lets say yours runs 300w, 24/7 all year. That's 2682 Kwh."

        I have a download box using a Sandy Bridge dual-core Celeron G530, a DC-DC power supply and an SSD. The system uses 17W from the wall socket at idle running XP Pro and peaks at around 40W with 100% CPU load. This is a standard 65W chip with idle consumption comparable to an i3.

        The base power draw of the system is therefore 17W plus drives, which is in the same ballpark as these NAS units.

        If you need big storage, buying multiple NAS units isn't a great option.

        In my view these are OK as an always-on basic device with a couple of drives, provided the base unit costs around £100.

        £300-500 for what's basically a simple CPU board and a box for drives is crazy.

      3. K
        Thumb Down

        Re: I don't get NAS boxes...

        Sorry, but you talk bollocks :)

        I built and use an Intel i3-based NAS box with 5 hot-swap drives. The case has a 150W power supply but actually draws less than 100W. The drives are Samsung 5.2k rpm ECO drives. And no, the HDDs don't power down; in fact I run VMware on it with at least 4 VMs running at all times.

    2. Michael Habel
      Meh

      Re: I don't get NAS boxes...

      And how much Juice does that thing swallow in a year?

      While your Rig sounds mighty impressive, I for One would prefer something more light-weight in the power consumption category for something that's meant to be up 24/7/365, and the odd Leap Day.

      Even if I couldn't care less about Global Warming (or just plain warming my home, for that matter), I do tend to side with the Greens here. (Green = Money, i.e. money saved from having to pay for all that up-time that such a rig with a 500W+ PSU would imply.)

      So no: OTOH it's a quick and dirty way to get something up and running, but unless you're in the 47% this is not really value for money.

      1. Matt Bryant Silver badge
        Boffin

        Re: I don't get NAS boxes...

        "....I for One would prefer something more light-weight in the power consumption category..." Try using laptop mobos in a DIY NAS, many have an eSATA port or USB ports you can attach drives to. Laptop drives are also lower on power consumption and heat output, laptop fans are usually not that noisy, and parts readily available. And if you don't feel confident about building a DIY rig or configuring Linux you can even just use the laptop as is and configure WinXP to share out Windows volumes if all you have is Windows clients. WinXP has all the networking (and some simple security) required for such a task. You can use a laptop as a NAS with external drives and then in an emergency you have something you can use as a spare desktop should your main desktop/laptop fail. For 90% of households that's all that is really needed.

  1. Mage Silver badge

    Raid0

    What sort of useful test is that?

    I'd only buy one of these if it was good for Raid 5 or 6.

    1. MacGyver
      Coffee/keyboard

      Re: Raid0

      I agree, and as someone that looked EVERYWHERE for a nice enclosure with 5 hot-swappable external bays (plus a 6th internal bay for the OS drive): they don't exist.

      I ended up going with a Chinese case that had 5 tool-less internal bays, and a lower tray that can hold 2 more. Anyone that makes their own NAS knows you need at least RAID5, and if you have ever lost a RAID5 NAS, you know you really should have had a RAID6 array.

      I prefer using software RAID6 (a la Linux mdadm) because it doesn't lock me into an expensive hardware card that has to be replaced by the exact same model in the event of a failure. Given the speed of CPUs (i3) and my demands (at most 4 requests at a time), software RAID affords me the flexibility of moving my array to any flavour of Linux that supports mdadm, and OSS gives me numerous management and diagnostic tools to build/diagnose/repair all manner of issues that might pop up (Windows 2003 offered next to zero tools to deal with software RAIDs). The only thing that could cause me to lose data at this point would be losing 3 of my 5 drives at the same time.

    2. Anonymous Coward
      Anonymous Coward

      "RAID 5 or 6"

      No, no, and thrice no! Parity = bad.

      http://www.miracleas.com/BAARF/RAID5_versus_RAID10.txt

      http://www.infostor.com/index/articles/display/107505/articles/infostor/volume-5/issue-7/features/special-report/raid-revisited-a-technical-look-at-raid-5.html

      http://www.ecs.umass.edu/ece/koren/architecture/Raid/basicRAID.html#RAID%20Level%205

      1. MacGyver
        Holmes

        Re: "RAID 5 or 6"

        A RAID5 array with 4 drives (1TB + 1TB + 1TB + 1TB = 3TB) and I can lose 1 disk before I lose data.

        A RAID6 array with 5 drives (1TB + 1TB + 1TB + 1TB + 1TB = 3TB) and I can lose 2 disks before I lose data.

        A RAID10 with 4 drives (1TB + 1TB + 1TB + 1TB = 1TB) and you can lose all but one before you lose data.

        RAID10 great for speed and redundancy, and bad on storage space.

        RAID5 is ok for speed, bad for redundancy (if you lose 2 at once), and great for space.

        RAID6 is ok for speed, great on redundancy, and ok for space.

        I guess I should have added. "If you are not rich, and don't have infinite space, use RAID6, otherwise use RAID10." I would guess most people buying a sub $500 NAS don't have an infinite budget.

        1. Wilkenism
          Headmaster

          Re: "RAID 5 or 6"

          Some interesting calculations going on there!

          Pretty sure 4 x 1TB drives in RAID10 gives 2TB of usable space, not 1TB :)

          Also RAID6 is better on redundancy than RAID10 as ANY two disks can fail in RAID 6 (due to distributed parity) however in RAID10 it depends:

          Remember that RAID10 is just mirrored arrays (RAID1) inside a striped array (RAID0) if two disks fail from the separate mirrors, no biggie, the array can be rebuilt. If both disks are from the same mirror, you've lost all your data! (How often do two disks fail at the same time for small arrays like this anyway? And how unlucky would you have to be for both of them to be in the same mirrored array?!)

          But for the performance you get over either of the other implementations it might be worth the loss in capacity (versus both) and redundancy (versus RAID6) - especially if you add further RAID1 arrays within the RAID0 stripe: 3 RAID1 arrays would mean [almost] 3x the performance of a single disk!
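          The usable-capacity and failure-tolerance arithmetic being debated in this sub-thread can be written out directly (a sketch assuming n identical drives, and two-way mirrors for RAID10):

```python
# Usable capacity and guaranteed disk-failure tolerance for the RAID levels
# discussed above, assuming n identical drives of `size` TB each.

def raid5(n: int, size: int) -> dict:
    # Single distributed parity: any 1 drive can fail safely.
    return {"usable_tb": (n - 1) * size, "survives_any": 1}

def raid6(n: int, size: int) -> dict:
    # Double distributed parity: any 2 drives can fail safely.
    return {"usable_tb": (n - 2) * size, "survives_any": 2}

def raid10(n: int, size: int) -> dict:
    # Stripe over 2-way mirrors (n even): any single failure is safe; a
    # second failure is fatal only if it hits the surviving half of the
    # same mirror pair (with 4 drives, a 1-in-3 chance).
    return {"usable_tb": (n // 2) * size, "survives_any": 1}

print(raid5(4, 1))    # {'usable_tb': 3, 'survives_any': 1}
print(raid6(5, 1))    # {'usable_tb': 3, 'survives_any': 2}
print(raid10(4, 1))   # {'usable_tb': 2, 'survives_any': 1} -- 2TB, not 1TB
```

          "survives_any" here is the worst case: RAID10 can often survive a second failure, but only RAID6 guarantees it.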

  2. Dwayne
    IT Angle

    iSCSI

    Any feedback on which units support iSCSI? Also, clarification on whether shared CIFS/NFS mounts are supported would be helpful.

    1. AOD
      Thumb Up

      Re: iSCSI

      I can't speak for any of the other brands but the QNAP software (I have a TS-410) supports iSCSI, CIFS, NFS and a whole bunch more. It supports dynamic disk expansion so adding more/larger disks doesn't mean you lose access to your data while it does its thing.

      As for the whole "why not roll your own" argument, well to be honest, you're paying for the convenience more than anything else. The HP microservers mentioned elsewhere are nice bits of kit, but AFAIK, they don't support hot swapping drives with the stock BIOS (whereas a lot of the NAS units will support hot swapping).

      My TS-410 acts as a focal point for our movies (happily feeding multiple Apple TVs running XBMC), stores our photos (which are backed up to S3 and Crashplan) and also acts as a backup destination for our home machines.

      It also runs Sickbeard with Sabnzbd and wakes up a hibernating XBMC client via WOL to update the shared mysql media library (also on the QNAP) when something new has arrived.

      I spend most of my days solving IT related FUBARs so when I get home, I don't really want to do that all over again. The QNAP is a bit of kit that I can just leave to get on with it knowing that if there is an issue, it will either email me (assuming it can) or I can get some guidance from a helpful user community. The most serious issue I've had with it was when I found it flashing lights on two drives claiming they were degraded/not available (the unit has 4 x 2TB drives running in RAID5). Turned out it was caused by a brief power outage (and the drives were fine after a complete power cycle), following which my next purchase was a UPS to prevent a repeat.

  3. Alan Brown Silver badge

    not enough bays

    Disks are crap - and large ones can be expected to regularly return corrupted data which their ECC hasn't picked up (statistically it's about 4 sectors on a 2TB drive if you read it from end to end). 4 drives isn't enough for decent RAID levels, and RAID has "issues" compared with more advanced systems such as ZFS (which is designed from the ground up with the assumption that not only do disks fail, their ECC is flaky, so it detects and CORRECTS such errors).

    Seriously, with the amount of stuff that people are piling into their media servers, 20TB isn't that much anymore, and for proper resilience with large drives you need 7 of 'em to ensure good metadata spread.

    These external NASes cost far too much, compared with simply shoving 4 or more drives into a low-spec PC and installing FreeNAS or similar as the OS.

    1. This post has been deleted by its author

    2. Anonymous Coward
      Anonymous Coward

      Re: not enough bays

      While possibly/probably effective, your solution does not work out of the box and relies largely on self support.

      These NASes can be in use for file storage within 10 minutes of opening the box.

      And for compactness, a NAS is hard to beat, a Synology 413j Slim gives 4 disk RAID in a box "120 X 105 X 142 mm" which would sit comfortably next to the TV in the lounge or on the desk in the study.

      1. Infernoz Bronze badge

        Re: not enough bays

        These off-the-shelf NAS boxes may be pretty and small, but they have tiny disk capacity, poor and noisy cooling, cheap PSUs, and they all use unsafe logging filesystems.

        My midi-tower PC box has two hidden filtered 120mm slow quiet fans to keep 5 hard disks cool (total space for 8 dampened disks), has a cool dual-core AMD E-350 mobo with 8GB RAM and a generously over-rated PSU, and hosts FreeNAS 8.3; it is quiet, attractive and only uses 50W - all at a big saving on an off-the-shelf NAS, and more capable too. I have upgraded the OS several times since the NAS was built; no rebuild required.

        My next FreeNAS box will have a lot more capacity and possibly a low-power i3, given I realise that although the CPU was not stressed at high load, the I/O bandwidth probably was, so I need to go for a more powerful mobo and CPU.

        There is plenty of support for FreeNAS, and it's quicker too, given they have full docs, a forum, and an IRC channel online; this easily beats most commercial support. E.g. when an OS upgrade messed up remounting my RAID array, I discussed the issue via IRC, a fix was rolled into an update release, and I was up and running again within an hour. IMO better than phone support :)

      2. JEDIDIAH
        Linux

        Re: not enough bays

        Putting an array together is 5 minutes of work. You Google it once and you are set for the next 5 years or however long your setup manages to meet your requirements.

        Just knowing "what buttons to push" on an appliance is going to put you way beyond the skill or comfort level of most people. The shiny happy interface (or lack of one) really isn't the biggest problem here.

        4 disks just isn't enough. Not enough bays to handle redundancy or parity and hot spares and such.

        1. Anonymous Coward
          Anonymous Coward

          PARITY = BAD

          Do not use parity. You will regret it.

          http://www.miracleas.com/BAARF/RAID5_versus_RAID10.txt

          http://www.infostor.com/index/articles/display/107505/articles/infostor/volume-5/issue-7/features/special-report/raid-revisited-a-technical-look-at-raid-5.html

          http://www.ecs.umass.edu/ece/koren/architecture/Raid/basicRAID.html#RAID%20Level%205

        2. Alan Edwards

          Re: not enough bays

          > Putting an array together is 5 minutes of work.

          And then the thick end of a day for it to actually build the array :-)

    3. Matt Bryant Silver badge
      Facepalm

      Re: not enough bays

      "....with the amount of stuff that people are piling into their media servers, 20Tb isn't that much anymore...." So stick it on the cloud and let someone else look after it on proper arrays, which make ZFS look like the toy software it is. ZFS can't cluster and offers SFA resilience as it can't even work properly with hardware RAID. Seriously, get a high-speed internet connection and leave the media on iTunes, Amazon, StorageMadeEasy, Microsoft or some other cloud where it will be protected and replicated between massive datacenters, and probably at less cost than buying a four-slot NAS every couple of years and backing it up yourself. 99% of the cruft stored on home NAS units could be stored on the cloud with a little thought and planning, even by as simple a method as emailing it to yourself in Hotmail. If you're feeling paranoid then encrypt it before you store it but you will have to accept the penalty of having to decrypt it before you can use it again.

      1. Paul Crawford Silver badge

        Re: not enough bays

        Seriously, you think that a home/small business internet connection can support access to 20TB of data in the cloud?

      2. Anonymous Coward
        Anonymous Coward

        Re: not enough bays

        You really are either a clueless moron or a piss-taking enterprise BOFH:

        1. Yes, people do need lots of space now, especially SMEs; no they won't pay enterprise prices, ever!

        2. The cloud is WAY too expensive and slow for 20TB of data; and the costs and risks will shock you!

        3. The internet is hideously slow even on 80Mbit fibre for this volume of data, and congestion and latency can be horrible compared to a local NAS.

        4. Mailing multiples of your mailbox capacity to yourself in Hotmail; you must be on Class A drugs!

        5. ZFS is pretty much as good as it gets, and free in FreeNAS; I know I use it a lot!

        I won't even discuss the rest, it's completely irrelevant, especially enterprise level stuff like clustering!

        1. Matt Bryant Silver badge
          FAIL

          Re: Re: not enough bays

          "You really are either a clueless moron or a piss-taking enterprise BOFH...." Well, a bit of the latter really - I work with enterprise kit but have completely different requirements at home. And I do like taking the piss out of morons like you.

          "....1. Yes, people do need lots of space now, especially SMEs; no they won't pay enterprise prices, ever!..." So they don't. They buy stuff like the Microserver mentioned. If their business grows they move up to the SMB ranges from people like HP or Dell.

          ".....2. The cloud is WAY too expensive and slow for 20TB of data; and the costs and risks will shock you!...." It's called storage tiering; it works for individuals as well as big corporations. Stuff of low importance - back it up to writeable DVD; stuff of high importance - stick it on the cloud. Who said anything about 20TB?

          ".....3. The internet is hideously slow even on 80Mbit fibre for this volume of data, and congestion and latency can be horrible compared to a local NAS....." Yes, but do you look at every item on your NAS and require it instantly? Most people I know actually treat their home NAS more as an archive - stuff they have finished with gets shifted off their laptop/desktop to be stored on the NAS. If you need constant access then a home fileserver would probably be a better idea than a NAS.

          ".....4. Mailing multiples of your mailbox capacity to yourself in Hotmail; you must be on Class A drugs!...." Storage tiering - it's an easy way to store important docs, I can send myself encrypted material if I'm worried about MS (or hackers) taking a peek, and I can access them from just about any device with Internet connectivity from anywhere in the World. For example, I keep scans of my passport and other travel docs in an encrypted and compressed file in Hotmail, and it was a lifesaver when my hotel room was burgled in Beirut. I've been doing it roughly since Hotmail was launched. You can also be naughty and run several Hotmail accounts to spread the load and ensure one hacked account doesn't mean you lose everything, just don't call them something obvious like joefilestore1@hotmail.com, joefilestore2@hotmail.com..... And Hotmail now comes with free online Office for editing if you're really stuck somewhere with nothing but a smartphone. Try a little thinking outside the box before you start shrieking about drug-use.

          "......5. ZFS is pretty much as good as it gets, and free in FreeNAS; I know I use it a lot!..." Ah, I see your rabid and frothing response is not based on any calm and rational thought as much as a Sunshiner desire to defend your Holy ZFS. I can't help it if your love of ZFS makes you blind to better and simpler solutions, and - frankly - I couldn't give a damn if you're too stupid to consider other options. Your loss.

          "......I won't even discuss the rest, it's completely irrelevant, especially enterprise level stuff like clustering!" Really? Why not? Because your product can't do it. I can make two cheapo Linux servers and set up clustering between them. I can do the same with Windows. But you can't do it so you refuse to discuss it. True, the average home user won't think of it, they may actually think that buying a NAS means they have resilience and 100% data availability. I work with enterprise kit so I tend to think the more resilience the better, and seeing as I have access to lots of excess kit whenever we hit the three-year refresh cycle, it's pretty easy for me to implement at home. It's like the saying goes, ask a London cabbie what the best family car is and he won't say a BMW or Ford, for him it's a black cab. For you it's obviously a soapbox kart, but that's your problem.

          /SP&L

      3. Kebabbert

        Re: not enough bays

        "... ZFS can't cluster and offers SFA resilience as it can't even work properly with hardware RAID..."

        Matt, Matt. As I tried to explain to you, hardware RAID is not safe. I have shown you links on this. And NetApp's research says that too; read my post here to see what NetApp says about hardware RAID. There is much research on this. Why don't you check up and read what the researchers in comp sci say on this matter, instead of trusting me?

        OTOH, researchers say that ZFS protects against all the errors they tried to provoke, and concluded that ZFS is safe. When they tried to provoke and inject artificial errors into NTFS, EXT, XFS, JFS, etc., they all failed their error detection. But ZFS succeeded. There are research papers on this too; they are here (papers numbered 13-18):

        https://en.wikipedia.org/wiki/ZFS#Data_Integrity

        .

        And you talk about the cloud. Well, cloud storage typically uses hw-raid which, as we have seen, is unsafe. And the internet connection is not safe either; you need to do an MD5 checksum to see that your copy was transferred correctly. You need to do checksum calculations all the time - just what ZFS does, but hw-raid does not. Therefore you should trust your home server with ECC and ZFS more than a cloud. Here is what the cloud people say:

        http://perspectives.mvdirona.com/2012/02/26/ObservationsOnErrorsCorrectionsTrustOfDependentSystems.aspx

        "...Every couple of weeks I get questions along the lines of “should I checksum application files, given that the disk already has error correction?” or “given that TCP/IP has error correction on every communications packet, why do I need to have application level network error detection?” Another frequent question is “non-ECC mother boards are much cheaper -- do we really need ECC on memory?” The answer is always yes. At scale, error detection and correction at lower levels fails to correct or even detect some problems. Software stacks above introduce errors. Hardware introduces more errors. Firmware introduces errors. Errors creep in everywhere and absolutely nobody and nothing can be trusted...."
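
        The per-transfer check being described needs nothing more than the stock md5sum tool (file names here are placeholders):

        ```shell
        # Make a sample file standing in for data about to be copied somewhere
        printf 'important data\n' > sample.dat

        # Record its checksum before the transfer...
        md5sum sample.dat > sample.dat.md5

        # ...then verify the copy at the other end; prints "sample.dat: OK"
        md5sum -c sample.dat.md5
        ```

        ZFS simply does the equivalent of this on every block, automatically, on every read.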

        .

        Matt, read and learn?

        1. Matt Bryant Silver badge
          FAIL

          Re: Re: not enough bays

          "... As I tried to explain to you, hardware RAID is not safe..." Usual Kebabfart - lots of blather, lots of evasion, no answers to the point raised. Come on, just admit it: you can't cluster ZFS, it introduces a big SPOF into any design. For a hobby NAS, provided you can afford to pay for ridiculous amounts of RAM and CPU, it might be passable, but there are far better solutions that can work with a lot less hardware AND can be clustered if required.

          Someone forgot to tell you, Sun is dead. Stop trying to flog a dead horse; they won't give you any more paid-for blogging awards.

    4. petur
      Stop

      Re: not enough bays

      What makes you think there are only 4-bay models?

      A quick look at the QNAP site shows they have models with 1, 2, 4, 5, 6, 8, 10 and 12 bays. The latter ones run with beefier Intel CPUs, not Atoms.

      And as for building one yourself: sure, why not, just like you can build your PC yourself. Some people prefer that, others go for a pre-built model. Another advantage of these NAS boxes is they are *very* compact, and most certainly use less power than anything you build yourself. And no worry about hardware compatibility: the OS that comes with them supports its hardware, something that isn't automatically so for build-your-own boxes.

      1. JEDIDIAH
        Linux

        Re: not enough bays

        Intel parts aren't nearly as power hungry as they used to be. Power management is a lot better across the board. So there are fewer and fewer reasons to shell out the cash for an appliance.

        ...and while there are more "robust" appliances, those are even more ridiculously overpriced than the small ones that are the subject at hand.

  4. Woodnag

    ZFS

    As various posters have mentioned, ZFS (available as part of FreeBSD) is the only reliable FS available for free. Trouble is, Sun hasn't open-sourced the version with native encryption yet, and the alternatives (GELI) are frankly a PITA.

    Honestly, setting up and using FreeNAS on some old machine with ZFS is dead easy, and it gives plenty of early warning when a drive is going dubious. However, you do need 8GB of RAM as a practical minimum.

    1. Ramazan
      Alert

      Re: native crypto

      You should never use FS-level crypto - opt for PV-level instead (only /boot is open; everything else, including the swap partition, is encrypted with a passphrase no shorter than 24 characters).

  5. Andy E
    WTF?

    Thecus? Recommended? Misguided Fool!

    I was interested in the article up until I got to the bit where Thecus was recommended. I own a Thecus N2200Plus box and it is utterly crap. Most of the features don't work, the support from Thecus is appalling. The support forums are littered with people's distress stories, and I have personal experience of losing data when the RAID array just stopped working for no apparent reason.

  6. Peter Galbavy

    ReadyNAS v2 for £280? Really?

    Just bought one from Amazon to replace my original Infrant NV+ for £145 empty...

  7. petur
    FAIL

    Weird NAS selection

    Certainly on the QNAP part, as both models are in fact LOW-end models, not high-end as the review says... If they had taken a TS-459 or even TS-469 it would have blown the competition away (my TS-269 saturates gigabit (100MB/s) and needs dual LAN + a beefier switch to deploy its full potential).

    Given the selection of models, it is easy for the reviewer to steer the outcome of the article.

    1. Mark 65

      Re: Weird NAS selection

      It does seem strange that the choice of QNAP appliance wasn't at the same level in the range as the Synology one. They are generally more expensive though, so maybe that had a bearing, but I agree that the 459 would have been a better choice and achieves over 100MB/s writes. I used to have a tower system running Linux but moved to a QNAP as it can sit in the lounge and is small and quiet in operation. I like the appliance nature. My decision may have been different if the HP server people have mentioned was available then.

  8. Paul Crawford Silver badge

    Data integrity?

    One critical issue in my view is data integrity. That is what a NAS is supposed to do: store data reliably. But the article fails to address that. Do they support internal file systems that have data checksums (like ZFS)?

    If not (and this is important even with ZFS), do they support automatic RAID scrubbing, where periodically all of the HDDs are read and checked for errors in the background?

    Most folk at home will only have 1 HDD of protection (RAID-1 or RAID-5), and what happens later in life is a HDD fails, you replace it, and you find bad sectors on the other disk(s), thus corrupting the valuable data. With two HDDs of protection (e.g. RAID-6 or ZFS' RAID-Z2) you can cope with one error per stripe of data while rebuilding, but that is not always enough.

    That is why you want to check once per fortnight/month that the HDDs are all clean, and so allow the HDD to internally correct/re-map sectors that had high error rates when read, and, if that fails, to re-write any uncorrectable ones from the RAID array.

    Of course, sudden HDD failure happens, maybe even multiple HDDs, or PSUs, as does "gross administrative error", which is why you should all repeat "RAID is not a backup" twice after breakfast...

    1. petur

      Re: Data integrity?

      I think most NAS models support disk checks. My QNAP monitors SMART and can be scheduled to do quick but also extensive disk tests looking for bad blocks.

      Sadly no ZFS (yet)

      1. Paul Crawford Silver badge

        @petur

        The problem with simply monitoring the SMART status is it won't know about bad sectors until you try to read them. Often by then it is too late.

        SMART has support for a surface scan, and while that allows marginal sectors to be re-written, it just reports any uncorrectable sectors as bad, and you won't generally know about that until a HDD fails and you need to re-build the array.

        Hence the advantage of the RAID scrub process:

        1) It accesses all of the HDD sectors (or all in-use ones in the case of ZFS), forcing the HDD to read and maybe correct/re-map any that are marginal, just as the SMART surface scan will do.

        2) For any that are bad, it, by virtue of being in a RAID system, can then re-write any bad sectors with the data from the other HDD(s) and that will normally 'fix' the bad sector (as the HDD will internally re-map a bad one on write, and you still see it as good due to the joys of logical addressing).

        Recent Linux distros like Ubuntu will do a RAID scrub on the first Sunday of the month if you use the software RAID, which is good. But I don't know of any cheap NAS that pays similar attention to data integrity.
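
        That scheduled scrub boils down to very little; a sketch, assuming an md array called md0 (yours may differ):

        ```shell
        # Kick off a background RAID consistency check ("scrub") by hand;
        # the md driver re-reads every stripe and re-writes unreadable
        # sectors from the redundant data. Progress shows in /proc/mdstat.
        echo check > /sys/block/md0/md/sync_action

        # Or schedule it, cron.d style, for roughly the first Sunday of
        # the month (similar in spirit to Ubuntu's checkarray cron job):
        # 0 1 1-7 * * root [ "$(date +\%w)" = 0 ] && echo check > /sys/block/md0/md/sync_action
        ```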

        Not counting RAID-0, OK?

      2. Mark 65

        Re: Data integrity?

        Not sure if the QNAPs will ever get ZFS, as I believe its memory requirements for good operation exceed what most boxes will have - I believe 1GB of RAM per TB of storage is recommended, with typically 8GB min. for good performance. My TS-439 has 1GB, as do most others.

  9. jonathan rowe
    Thumb Up

    microserver N40L

    Don't waste your time with any of these; an N40L with 8GB of ECC RAM and an Intel NIC (the N40L's built-in one does not do jumbo frames) will wipe the floor with them performance-wise. It has an internal USB slot onto which you install FreeNAS, and then you get ZFS.

    ZFS + RAIDZ2 + ECC memory - don't trust your precious data to anything less.
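
    For the record, a RAID-Z2 pool of the sort praised here takes only a couple of commands on FreeNAS/FreeBSD - a sketch, with made-up pool and disk names:

    ```shell
    # Double-parity pool: any two of the six disks can fail without data loss
    zpool create tank raidz2 da0 da1 da2 da3 da4 da5

    # Walk every checksum and repair any silent corruption found
    zpool scrub tank
    zpool status tank   # reports scrub progress and any repaired errors
    ```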

  10. Kevin McMurtrie Silver badge
    Thumb Down

    For when the world isn't perfect

    I use NAS for backups so I like to see some protection against the usual problems.

    What happens when a power failure interrupts writes? What happens when the NAS is in redundant mode and a disk fails? Does it send an e-mail, blink an LED that will never be seen, or pretend like nothing is wrong? What happens when a failed drive is replaced? Can bundled drives be replaced under warranty without long downtime? There are plenty of NAS out there that claim RAID 5 protection but are unusable for days when something goes wrong. I recall an old D-Link and a more recent LaCie 5big that needed to be wiped clean and shipped off for warranty drive replacement. Even if they had simply sent me a new drive, they would have needed days to rebuild too. I don't like being without backups for days/weeks, so I end up buying a different brand of NAS and giving away the old one when it comes back. What a waste of money.

    1. Mark 65

      Re: For when the world isn't perfect

      QNAPs will email alerts; the same goes for Synology, I would imagine. As for power interruptions, if you worry about your data enough to be using a RAID-equipped NAS then I suggest you spring for an APC UPS that can send notifications via its USB connector that the NAS will act upon (configurable in the GUI). I used to have a UPS on my PC before I bought the NAS to guard against power failures, as it seemed only sensible. Array rebuild time will be a function of the processor, as it's doing a fair amount of work; a 2TB disk replacement caused a rebuild taking hours on a QNAP rather than days. It will also real-time sync to an external backup, send data to Amazon S3 or ElephantDrive, or sync to another remote NAS. Both companies have built-in SSH amongst other things on their appliances.

    2. Mark 65

      Re: For when the world isn't perfect

      FYI - smallnetbuilder.com is the site to check out on these matters.

  11. Sean Timarco Baggaley
    WTF?

    @ZFS Fanboys:

    Yes, we get it, ZFS does some neat stuff. Guess what? Most people (myself included) find it easier to just run a regular (in my case, weekly) backup of important data to an external drive connected to the NAS box via USB. (I also make a weekly clone of my computer's drive on the same day.) Job done.

    As for why I bought a ready-built NAS appliance: I did so for the same reason I prefer to live in ready-built homes. My time is worth money. I'm worth £300 a day as a technical author. (And that's cheap; some charge as much as £700 / day.) I'm not a fan of UNIX in any of its flavours, so setting up even a FreeNAS box isn't something I enjoy. I'd spend hours perusing the Web to find out the best practices, the arcane spells that need to be typed into the shell, and so on. On top of which, I'd also have to order all the parts and wait for them to be delivered.

    Why the hell would I waste £600 or more of my time (and days of my life) working on a device I can just buy off the shelf for less than half that, and which would be up and running within minutes of my taking it out of the packaging?

    Just because YOU enjoy a bit of DIY in your preferred field of expertise, it does not follow that everyone else does too. My background is in software, not hardware. I know how the latter works, and I've built dozens of PCs over the years – mostly for relatives and friends – but it is not something I find particularly rewarding.

    I have no more interest in building my own NAS boxes and laptops than I do in building my own home or car. The time required for the DIY approach is not 'free' unless you actually enjoy doing that sort of thing as a hobby. I don't, so, as far as I'm concerned, it's time wasted on doing something boring and irritating instead of time I could be earning doing something fun and rewarding.

    1. jonathan rowe

      Re: @ZFS Fanboys:

      Sean, believe me, if any of these NAS boxes used ZFS (or BTRFS or the new Windows FS) I would buy one at the drop of a hat, but my data is just too important to leave to chance. I am glad you back up, but if the data on the disk goes bad, then so do all your backups - the problem is that you don't know your data is corrupt until it is too late and all your backups have been 'polluted' with bad data.

      You can do a FreeNAS setup in about half a day - there are no arcane spells involved at all; a modest investment compared to the immeasurable expense of losing important data or, worse still, not knowing that you have lost important data when your NAS box says 'yep, all hunky dory'.

This topic is closed for new posts.

Other stories you might like