Ten four-bay NAS boxes

The storage needs of home users are ever growing, such that the capacity of dual-layer DVDs appears minuscule, and backing up to CD looks desperate. Many are now turning to network-accessible storage systems that not only allow data storage in the home, but also provide HTTP, FTP and cloud services for when you’re on the go. The …

COMMENTS

This topic is closed for new posts.
  1. Anonymous Coward

    What I'd be looking for in such a thing

    Quiet operation

    Properly managed cooling of the disks

    Disk vibration dampening

    Caddy-free disk mounting

    Hardware crypto acceleration for full-disk encryption

    Gigabit ethernet -- getting more or less standard, not quite there yet

    Own OS support -- I'll be running a *BSD, sporting NFS, maybe AFS

    And optionally feeding it a mere 12V

    Somehow all I'm getting is "web interface"

    Back to building things by hand then. *sigh*

    1. Prof Denzil Dexter

      Re: What I'd be looking for in such a thing

      Would be interested to know how noisy the Netgear is. I have the Duo; it does the job, but the stock fan was really loud. It's not a rackmount, so let's not assume it's going in a DC!

      Also, the UI was insanely slow, even after upgrading the RAM from 256MB to a gig.

      Any chance of screenshots of the UIs?

      Think my next move might involve FreeNAS.

      1. HandleOfGod

        Re: What I'd be looking for in such a thing

        I have a couple of Netgear NV+ (v1) and find the interface to be quick and easy to use. Netgear are pretty good with firmware updates and both boxes have been updated to the latest-but-one version - maybe this helps with the interface? I do have two stacks of Netgear L2+ smart switches though, and their interfaces are pretty slow.

        No real complaints with the NAS boxes though. They used to be used for regular backups and periodically they'd eat disks, but a while back they were replaced with much more business-class kit and nowadays they mostly just sit there with occasional use for archiving. But even so they still eat disks - though it's usually only the two centre bays, and it's only Seagate. Due to the long warranty periods on the disks we were returning them to Seagate, receiving back recon drives which went back into storage and then back into the NAS when the next one failed. But the cycle was continuous, with 2-3 failures per year, so we started using the Seagates for other things and bought others (initially Samsung, and later WD when Samsung vanished) and the problem stopped.

        The outer two bays, which run about 10°C cooler, are much less of a problem - I just had to replace a failing Seagate in Bay 4 this week, but, that being the older of the two NV+ units, the drive has been in there for 5 years, which is a failure time I can live with.

      2. Chris_Maresca

        Re: What I'd be looking for in such a thing

        I have an NV+, it's pretty quiet except when it wakes up and the fan goes full bore.

        Had a Thecus; it was AWFUL. The UI sometimes wouldn't refresh properly, it was slow, very loud, prone to burnt-out power supplies, and the fan wasn't adequate to cool the drives... It did run Linux on x86, which was nice, but there is very little in the way of community, which is important if you have a problem.

        If I had to do it all over again, I'd get a Synology hands down. The NetGear ReadyNAS is nice, but Synology has a much better community & ecosystem.

        1. Dick Head

          Re: What I'd be looking for in such a thing

          Yes, the Synology is pretty good. I've had the 2-bay version for about a year and the interface and basic functions have got steadily better. Every time I log in (monthly or so), updates are available.

          Unfortunately they still haven't fixed automatic backup of larger SD cards, but the 4-bay version doesn't seem to have an SD slot, judging from the photo. Also the wireless option has never worked for me, even when a supposedly supported wireless USB adapter is plugged in.

          The support community is both active and helpful.

    2. Paul Webb

      Re: What I'd be looking for in such a thing

      I agree. A good postscript to this article would be a similarly brief round-up of build-your-own options using FreeNAS (and the rest). I'm happy with my HP ProLiant MicroServer, which I know many people here have, and which cost less than £150 (with cashback). No doubt the good burghers of this parish will be along soon enough with suggestions for the latest and greatest.

      1. Stoneshop

        Re: What I'd be looking for in such a thing

        I too was looking for a NAS server thingie, with the main requirements being rackmount (those do exist) and less than 2.5 linguini (35cm, 14") deep (those don't, as far as I've been able to find).

        A brief flirt with Travla, in the hope of obtaining a case that kind of met my requirements at a reasonable price and in a reasonable timeframe, failed as the price and timeframe turned out to be anything but reasonable.

        So I decided to roll my own. The result: built into a 19" 2HE audio case (those are easily available in the depth I required) is a 4x 2.5" drive bay, a mini-ITX board, a Flex-ATX PSU and two temperature-controlled fans. The air "inlet" is past the drives, so they get more than sufficient cooling. The board is mounted with its I/O panel towards the front. Freestanding it's not particularly quiet due to the 6cm fans, but mounted and with the rack door closed you barely notice it. For OS I briefly considered FreeNAS, but went with my favourite distro instead.

      2. R Varia

        Re: What I'd be looking for in such a thing

        Another vote for the micro server. A fraction of the price of the top end NAS boxes and solid performance. I have supplied a few as NAS / media playback device. Add a couple of GB of RAM, a £30 video card, your choice of O/S and they will happily playback HD video.

        1. Alan Edwards

          And another

          Another vote for the MicroServer. I've got two of them, an N36L running VMware ESXi and an N40L with FreeNAS 7 and 4x 2TB drives.

          I'd be interested in seeing a comparison of a dedicated box against a MicroServer/FreeNAS combi, or if there is better software than FreeNAS.

      3. Nuno Silva
        Thumb Up

        Re: What I'd be looking for in such a thing

        +1 for the HP Microserver. Cheaper, faster and you can install regular Linux or FreeBSD. I really can't understand why someone would buy one of these overpriced and limited "NAS" devices... Ohh well...

      4. Blacklight

        Re: What I'd be looking for in such a thing

        Indeed! An N40L can host 5 drives internally (if you use the 5.25" bay intended for optical drives). I've got OpenMediaVault on mine and it does just fine for me :)

  2. ElsieEffsee

    HP ProLiant MicroServer

    At a fraction of the cost of these NAS boxes and with more flexibility, I'm glad I've opted for two of these instead of a NAS.

    1. firefly
      Thumb Up

      Re: HP ProLiant MicroServer

      Totally agree, the price of these NAS devices is ludicrous considering a Microserver is £120 after cashback, comes with 2GB of RAM, a 256GB drive and supports ECC. The 5.25" bay is also useful if you want to install an optical or tape drive or even another 2 HDs for a total of 6.

      Granted they don't work out of the box, so you're going to have to fill it with drives and install and configure your favourite OS, but I don't think that's beyond most Reg readers.

    2. johnnymotel
      Thumb Up

      Re: HP ProLiant MicroServer

      I am constantly surprised that this box from HP gets so little recognition. I guess it's got to be a larger box than any of these, but its sheer capability and low price get my vote.

      The only thing that does dissuade me is setting one up with a Linux distro. If there were an idiot's guide on the web somewhere, I would get one.

    3. gazzton

      Re: HP ProLiant MicroServer

      There's a £100 cashback offer from HP on the ProLiant Microserver N40L during November, so, total cost about £120 all in from a number of retailers, for base configuration (2GB).

      1. AbortRetryFail
        Thumb Up

        Re: HP ProLiant MicroServer

        Yup. You can't argue at that price. Buy it for £120, stick in four drives, install FreeNAS onto a memory stick (sited in the internal USB port) and away you go. I have the older N36L and have just bought the new N40L for another project.

        One thing I would advise is to stick some more memory in though, as the ZFS file system is quite memory-hungry if you go for a RAID-Z configuration.

    4. bunual

      Re: HP ProLiant MicroServer

      Whilst it may be cheaper, the lower power usage and ease of setup lead me to choose a QNAP system. If you're happy to build your own then you can save money but I liked the idea of a box in which all I had to do was add hard drives and I was away. Mine has lasted for several years and done a seamless upgrade from 1TB to 2TB disks with no downtime. At the end of the day, it's done what I wanted at a price I feel was worth it.

      1. JEDIDIAH
        Linux

        Re: HP ProLiant MicroServer

        With what I save on not using overpriced appliances, I can have a completely redundant array.

        That nullifies many of the typical marketing bullet points associated with NAS appliances.

        Plus you've got a whole other copy of everything.

      2. Alan Edwards
        Thumb Up

        Re: HP ProLiant MicroServer

        It might not be that much lower on power than a NAS box; my N36L pulled about 35W when I tested it.

        Upgrading the drives will be an issue - UnRAID claims to be able to (I've not used it personally), FreeNAS can't.

        I found FreeNAS pretty simple to set up, but I played with it in a VM first to get it right. It's been running nigh-on 24x7 for over a year now.

  3. KroSha

    I have a ReadyNAS Duo. I'm really happy with it. It's quieter now that I've replaced the stock fan, although it wasn't that bad in the first place. Upgrading the RAM made quite a difference, but was a PITA. Back when I got it, NAS boxes with Apple Time Machine support were few and far between, and this was a major factor in the purchase. I'm rather surprised this article went for an NV+ rather than an Ultra4.

    1. Paw Bokenfohr
      FAIL

      Agreed. They should at least have reviewed both, and it's also odd...

      ...that they mentioned the on-the-fly expansion capabilities of the Synology, but didn't mention that the Netgear box has that same functionality, and indeed, Netgear were the first to offer it and have done for years and years since the acquisition of Infrant who created the ReadyNAS line in the first place.

      1. Tom 35

        Re: Agreed. They should at least have reviewed both, and it's also odd...

        Yes, they just mention in passing that the Synology can be expanded easily, and then don't say anything for any of the others. I would think that was a major feature that should be covered for all of them. I don't care how much RAM they have, I want to know how they work!

  4. Anonymous Coward

    Absurdly Expensive

    For about £1k I managed to pick up an HP StorageWorks MSA enclosure, 12x 2TB SATA drives, an interface card and a Supermicro server. So what's the real incentive of using these, especially when the data transfer rates are nowhere near any home-built NAS, apart from looking nice and pretty?

    1. firefly
      Facepalm

      Re: Absurdly Expensive

      First of all I call bullshit on your figures: 12x 2TB drives would cost at least £800 alone, and an MSA shelf would be another £800, and that's before you've bought your HBA and server. Secondly, the NAS devices reviewed here are designed for homes and small offices, and are designed to be quiet and consume little power. Your disk shelf, drives and server would consume around 300W and make a hell of a racket. And thirdly, these devices are designed for simple storage, where transfer rates, especially writes, are relatively inconsequential.

      Apples to oranges doesn't really begin to describe it.

      1. Matt Bryant Silver badge
        Boffin

        Re: Absurdly Expensive

        "First of all I call bullshit on your figures...." He never said what generation of MSA or if it was new. There is a lot of older used models on eBay going dirt cheap (http://www.ebay.co.uk/sch/i.html?_trksid=p5197.m570.l1311&_nkw=hp+msa&_sacat=58058&_from=R40). A friend has an hp MSA1000 with clustered RHEL fileservers he built for self-training for his RHCE, all housed in a half-height rack in his loft (so no problem with noise). He bought the lot (including the two old Dell servers, a 100Mb LAN switch and FC adapters) for £400 on eBay. I can't remember what size disks he got with it but the MSA was full when he bought it. His latest project is trying to get it to replicate over VPN to another MSA at his office.

        1. Danny 14

          Re: Absurdly Expensive

          I too call bullshit. A cheap older MSA won't support 2TB drives for a start. Plus you might then need to find a SCSI controller and a PC to serve as the NAS.

          An MSA1000 is hardly going to sit in the living room under the telly. Those things weigh a ton too, and connect via Fibre Channel. I don't see many laptops sporting Fibre Channel connections.

          The cheapest way would be a small case with two 5.25" bays - put a 4-bay hot-swap cage in there and use an integrated low-power board, Ion or AMD Fusion (can you get ECC integrated boards?). Use FreeNAS/Openfiler (Openfiler will also let you install the OS to the RAID array) and you should have a 4TB RAID10 for under £400.

    2. Stoneshop
      FAIL

      Re: Absurdly Expensive

      so what's the real incentive of using these [NAS boxes]

      - Size

      - Noise

      - Power consumption

  5. Unlimited
    Thumb Down

    performance is not the only metric

    file systems supported?

    idle power consumption?

    ssh?

    1. Danny 14

      Re: performance is not the only metric

      two gigabit connections for you herp derp samba :-)

  6. Gareth Perch
    Stop

    I don't get NAS boxes...

    ...unless it *really* needs to be that small - but when you need more bays, you have to go out and buy another one.

    Very little room for expandability - I've got 12x HDs (mostly 2TB and all Samsung), 1x optical and 1x OCZ SSD in my tower case. Three are in a hot-swap bay that takes up two of the three optical drive bays. There are big, slow, quiet fans in front of all the quiet, low-heat, low-power hard drives.

    Why not build a Win 7 / Win 8 PC with shared libraries, running full Windows software for downloading / web sharing / managing your files, that's upgradable by something much more flexible than firmware and that won't leave you unsupported in a year or two, requiring you to buy the new version? That's what I did:

    http://www.avforums.com/forums/networking-nas/1085092-mulvaneys-14-5tb-win-7-media-server.html

    I've updated it since then though, it's about 20TB now and I'm about to put in a Core i3, more RAM and Win 8. I'm lucky enough to be in a position to upgrade cheaply and often, due to buying / selling / repairing PCs in my spare time (I manage a 1500 user, 500 PC, 200 laptop network for my main job).

    I'm still waiting for 3TB drives to come down to £70 and 4TB to £100. Then I might wait until 4TB are £70 - I've got plenty of space for now.

    1. CADmonkey
      Headmaster

      Re: I don't get NAS boxes...

      I do...

      I bought a Synology DS-413J for £275. It arrived, I stuffed 4 disks in and turned it on (hybrid RAID FTW!). A fair bit of whirring and clicking later, it's all working. Job done.

      It sits quietly in the corner and acts as a printer server and media server for all the devices in the house. It runs on 30W.

      My gas-guzzling PC with the 6 fans and 1000W PSU doesn't need to be on 24/7.

      Easy to like.

    2. Kebabbert

      Re: I don't get NAS boxes...

      "...Why not build a Win 7 / Win 8 PC with shared libraries,..."

      Why use Windows? It is not safe and susceptible to data corruption. Every hard disk gets lots of read/write errors during normal usage, which get corrected on the fly by the error-correcting codes on the disk. However, some of these errors are not correctable by the codes. Even worse, some of these errors are not even DETECTABLE by the codes. The codes are not foolproof, you know. They might be able to correct one-bit errors and detect two-bit errors, but sometimes three-bit errors happen. Very rare, but they happen. And those three-bit errors are not correctable. Sometimes four- and even five-bit errors happen, and such errors might not even be detectable. This is called bit rot; Google it. Old VHS tapes don't work today; the data has begun to rot.

      It is exactly the same problem in RAM. Why do you think servers need ECC RAM? ECC RAM can correct one-bit errors and detect two-bit errors. Microsoft concluded (after collecting information about Windows crashes) that 30% of all Windows crashes were caused by random bit flips in RAM, something that ECC RAM would have protected against. Cosmic radiation is a big source of random bit flips, as are flaky power, solar flares, etc. There is much research on data corruption.

      You need a solution that always calculates a checksum of every block read from the disk, in effect doing an MD5 checksum on every read. Or SHA-1, or any other checksum. This is the only way to protect against bit rot on disks. Incidentally, ZFS is designed to protect against bit rot and does exactly that: for every block you read, it calculates a SHA-256 checksum. And if the checksum is not correct, it automatically repairs the block from the RAID. And ZFS is free, as in gratis. If you try ZFS, you must not use hardware RAID, because it messes up the checksum calculation. Sell it, and use free ZFS. There is much research on data corruption on disks. And of course, if you are serious about your data, you should use ECC RAM too.

      NTFS, ext, HFS, XFS, JFS, etc. are not safe and might corrupt your data, according to research. And ZFS is safe, according to researchers.

      http://en.wikipedia.org/wiki/ZFS#Data_Integrity
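The per-block checksumming with automatic repair described above can be sketched in a few lines of Python. This is a toy model under stated assumptions (a two-way mirror of in-memory dicts, SHA-256 on every block); the names are illustrative, not ZFS's actual implementation:

```python
import hashlib

def checksum(block: bytes) -> str:
    # ZFS-style integrity: hash every block on write, verify on read
    return hashlib.sha256(block).hexdigest()

class MirroredStore:
    """Toy two-way mirror with per-block checksums (illustrative only)."""
    def __init__(self) -> None:
        self.mirrors = [{}, {}]   # two "disks": block_id -> bytes
        self.sums = {}            # block_id -> expected checksum

    def write(self, block_id: int, data: bytes) -> None:
        for disk in self.mirrors:
            disk[block_id] = data
        self.sums[block_id] = checksum(data)

    def read(self, block_id: int) -> bytes:
        for disk in self.mirrors:
            data = disk[block_id]
            if checksum(data) == self.sums[block_id]:
                # Self-heal: overwrite any stale copies with the good one
                for other in self.mirrors:
                    other[block_id] = data
                return data
        raise OSError(f"block {block_id}: all copies failed verification")

store = MirroredStore()
store.write(0, b"important data")
store.mirrors[0][0] = b"important dat\x00"       # simulate silent bit rot on disk 0
assert store.read(0) == b"important data"        # bad copy detected, mirror used
assert store.mirrors[0][0] == b"important data"  # and the bad copy repaired
```

A plain parity RAID would have returned the rotten copy without complaint; the point of the per-block checksum is that the read path can tell good data from bad.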

      1. K

        Re: I don't get NAS boxes...

        Can't fault you on your logic - but you're wrong.

        I've built many bespoke NAS boxes over the past few years for home usage - most of them run Windows, and I can't recall one of them having a problem.

        The issue with using a *NIX OS for the NAS is that few "home" users comprehend the bare OS, so if they have a problem, it's next to impossible for them to reliably fix it, and even if they Google for a solution, it's gibberish to them. Windows on the other hand is prevalent: 99% of users are familiar with it and I'd say 75% are capable of finding a solution to most problems (of course they try to hit up their "IT" friend to fix it first!).

        1. K
          Facepalm

          Re: I don't get NAS boxes...

          75% being on the very optimistic side of course..

        2. Michael Habel
          Linux

          Re: I don't get NAS boxes...

          Sadly what this Man says is true.

          I'm getting better at Linux though and will likely at some stage fully merge with it.

        3. JEDIDIAH
          Linux

          Re: I don't get NAS boxes...

          A n00b is going to have a problem with fixing any NAS. This goes for appliances as well as Windows boxes. There is simply no magic in Windows (or even MacOS) that hides the complexity of this stuff. "Normal people" just have trouble with the idea that they can create a shared drive under Windows and use it on another machine.

          Never mind anything that's really interesting.

      2. Anonymous Coward
        Anonymous Coward

        Re: I don't get NAS boxes...

        "Why use Windows? It is not safe and susceptible to data corruption"

        What a load of FUD.... don't forget to wrap some foil around your hard drives

        1. Steve I
          Unhappy

          Re: I don't get NAS boxes...

          "dont forget to wrap some foil around your hard drives"

          but I only bought enough foil to make a hat...

      3. Matt Bryant Silver badge
        Facepalm

        Re: I don't get NAS boxes...

        <Yawn> Another session of Kebabbert's "ZFS is the answer, no need to tell me your question!" I can get built-in RAID on most PC mobos, it is reliable and cost-effective, does not impact on CPU performance, why bother with the hassle of ZFS which steals cycles from the CPU?

        1. Paul Crawford Silver badge

          @Matt

          " I can get built-in RAID on most PC mobos" - that is almost certaily 'fake RAID' where the BIOS can boot from it but it is the OS that has to actually do the RAID computation. OK for simple RAID-1 or similar its easier than ZFS, but it still lacks the advantages of data checksums.

          ZFS is not the only file system that does that; GPFS has checksums as well, but most others I think only do metadata checksumming (e.g. Linux ext4, and MS' new and unproven ReFS, unless you explicitly ask for the extra checks/load).

          I can't believe you have never had that horrible feeling when you get a/multiple disk errors and no simple way to find out *what* has been corrupted by the failure of "sector 102345569" etc. Also, I am not the only one I know to have had data corruption in a file system due to bus/memory errors that were 'silent', so it was only on decompressing a ZIP archive (which has integrity checks in it) that it was discovered. Most other files have no checksums, so the true extent of the damage was not known and the tedium of complete backup restoration had to be undertaken.

          We all know you have an irrational dislike of all things Sun, but from an integrity point of view ZFS is one of the best choices for file systems, unless you are playing big-league with IBM's distributed system.

          1. Matt Bryant Silver badge

            Re: @Matt

            ".....'fake RAID' where the BIOS can boot from it but it is the OS that has to actually do the RAID....." If it does the job, and with a lot less cycles than ZFS, what is the problem? More mythical "bit rot"?

            ".......We all know you have an irrational dislike of all things Sun, but from an integrity point of view ZFS is one of the best choices for file systems....." So it's just me is it? So if ZFS is just so wonderful, why hasn't Oracle dropped BTRFS? Why does Oracle Linux use OCFS2? It looks like Oracle isn't too keen on ZFS either. And why has no other UNIX vendor dropped Veritas for ZFS? Because it's not up to the job.

            ZFS is just a ripoff of WAFL and has the same performance issues. It will slow as the file system fills up. ZFS can't cluster, which makes it a poor choice for real servers, and it hogs too much CPU, which makes it a poor choice for low-power NAS solutions. It also has problems with hardware RAID. OCFS2 and BTRFS don't have those issues, so I would rather recommend them than leftovers from the Sun carcass.

            /SP&L

            1. Paul Crawford Silver badge

              Re: @Matt

              "If it does the job, and with a lot less cycles than ZFS, what is the problem?"

              The problem is no integrity checking, same issue for Linux software RAID, etc. My data is valuable, so I want to know if it is uncorrupted, and this is something I have seen before.

              "Why does Oracle Linux use OCFS2?"

              Because ZFS' license is not compatible with the Linux kernel's GPL one, resulting in it generally being relegated to user-space, where performance sucks (same for all other FUSE systems). This is a legal issue, not a technical one.

              "ZFS is just a ripoff of WAFL"

              Hmm, I think the NetApp versus Sun/Oracle case was closed on that one after several of the patents were struck down. Odd you see that as a problem, as NetApp's customers like things like snapshots and copy-on-write. OK, they don't like the usurious license fees NetApp like to charge to actually *use* such features, but that is a separate issue.

              "It also has problems with hardware RAID"

              Not really, but if you use hardware RAID, or a separate software RAID layer, to present the storage to ZFS, you then lose the key advantage of error detection and recovery from 'silent' HDD/bus/memory errors that most dumb RAID systems miss. It will at least tell you the file(s) are corrupt, but too late to do anything by then.

              I have wondered why you have such a problem with anything Sun-related, as your other posts on DB stuff are clear and rational. So why are you not so caring about data integrity in a storage system? What do you use/recommend to verify data is exactly the same as when written?

              1. JEDIDIAH
                Linux

                Re: @Matt

                >> "Why does Oracle Linux use OCFS2?"

                >

                > Because ZFS' license is not compatible with the Linux kernel's GPL one,

                Huh? Oracle OWNS ZFS. They own it along with everything else that was part of Sun Microsystems.

                If they really wanted to use it then licensing it would not be a problem. They own it. They aren't some random 3rd party.

                The fact that ZFS is not a clustered filesystem is likely why Oracle uses OCFS2 instead.

                It's kind of like comparing apples and potatoes.

              2. Matt Bryant Silver badge
                FAIL

                Re: @Matt

                ".....The problem is no integrity checking...." Great, so ZFS tells you after you have a problem, and without the ability to use hardware RAID5 underneath to get round the issue. I run fsck via cron which is all scrub is, and so far I've not found any hobbyhorse sh*t on my drives. Maybe it only happens with Sun kit, if you pray to the Great Ponytail hard enough....?

                "....Because ZFS' license is not compatible with the Linux kernel's GPL one...." So you're suggesting use a deadend OS like Slowaris x86 then? So, ZFS has a dodgy licence, which Oracle could shaft you with at any point, no development roadmap that Oracle is going to guarantee sticking with, and you want to run it on an Oracle-controlled OS with even worse prospects? Good luck persuading the Penguinistas with that one! Of course, the fact that OCFS2 was designed from the ground up as a resilient, clusterable filesystem that works with Linux couldn't possibly have anything to do with Oracle's decision, right? ROFL! BTW, why exactly was it Apple took one look at ZFS and said "nyet"? I used to use FreeNAS but stopped when they included ZFS, as did many other people I know. Others dropped FreeNAS when version 8 started needing rediculous amounts of RAM to perform.

                "....Hmm, I think the NetApp versus Sun/Oracle case was closed on that one after several of the patents were struck down....." Nice try. Sun's own coders admitted they based their design on WAFL, and it has exactly the same space issues, patents or not. That's when it's not crashing and corrupting data all by itself.

                "....I have wondered why you have such a problem with anything Sun-related...." Years of suffering the Sunshine makes me less than likely to swallow marketing bumph dressed up as technical knowledge, thanks. There's a reason Sun died and it was because customers got Sunburnt and stopped believing them.

                /SP&L

                1. Kebabbert

                  Re: @Matt

                  Anonymous Coward:

                  >"Why use Windows? It is not safe and susceptible to data corruption"

                  >What a load of FUD.... don't forget to wrap some foil around your hard drives

                  You are right to request links; otherwise it would be pure FUD: making lots of strange negative claims without ever backing them up with credible links. Here you have a PhD thesis where the research concludes that NTFS is not safe with respect to data corruption. I suggest you catch up with the latest research if you want to learn more:

                  http://pages.cs.wisc.edu/~vijayan/vijayan-thesis.pdf

                  http://www.zdnet.com/blog/storage/how-microsoft-puts-your-data-at-risk/169

                  Dr. Prabhakaran found that ALL the file systems (NTFS, ext, JFS, XFS, ReiserFS, etc) shared:

                  "... ad hoc failure handling and a great deal of illogical inconsistency in failure policy ... such inconsistency leads to substantially different detection and recovery strategies under similar fault scenarios, resulting in unpredictable and often undesirable fault-handling strategies. ... We observe little tolerance to transient failures; .... none of the file systems can recover from partial disk failures, due to a lack of in-disk redundancy."

                  .

                  .

                  .

                  Matt Bryant:

                  >"...I can get built-in RAID on most PC mobos, it is reliable and cost-effective, does not impact on CPU >performance, why bother with the hassle of ZFS which steals cycles from the CPU?..."

                  First of all, hardware RAID is not safe with respect to data corruption. Here is some information if you want to learn about the limitations of hardware RAID:

                  https://en.wikipedia.org/wiki/RAID#Problems_with_RAID

                  Second, it is true that ZFS uses CPU cycles - a few percent of one core. There is a reason ZFS uses CPU cycles: ZFS does checksum calculations on every block. If you have ever done an MD5 checksum of a file to check data integrity, you know it uses CPU cycles. ZFS does that (a SHA-256 checksum, actually). Hardware RAID does not do any checksum calculations for data integrity; instead, hw-raid does PARITY calculations, which are not the same thing. Parity calculations are just some easy XOR calculations, and hw-raid is not designed to catch bit rot.
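The parity-versus-checksum distinction can be made concrete: XOR parity can rebuild a block once you know which one is bad, but a parity mismatch alone does not say where the damage is; a per-block checksum does. A toy sketch in Python (illustrative only, not how any real RAID firmware works):

```python
import hashlib
from functools import reduce

def xor(ints):
    # byte-wise XOR of a column of values
    return reduce(lambda a, b: a ^ b, ints)

blocks = [b"alpha---", b"beta----", b"gamma---"]      # equal-sized data blocks
parity = bytes(xor(t) for t in zip(*blocks))          # RAID-style XOR parity
sums = [hashlib.sha256(b).digest() for b in blocks]   # ZFS-style checksums

blocks[1] = bytes([blocks[1][0] ^ 0x01]) + blocks[1][1:]  # silent one-bit flip

# Parity verification detects *that* something is wrong, not *where*:
assert bytes(xor(t) for t in zip(*blocks)) != parity

# Checksums pinpoint the rotten block, so parity can then rebuild it:
bad = [i for i, b in enumerate(blocks)
       if hashlib.sha256(b).digest() != sums[i]][0]
good = [b for i, b in enumerate(blocks) if i != bad]
blocks[bad] = bytes(xor(t) for t in zip(parity, *good))   # b1 = parity ^ b0 ^ b2
assert hashlib.sha256(blocks[bad]).digest() == sums[bad]  # repaired
```

Without the per-block checksums, the parity mismatch leaves you knowing only that one of the four blocks (three data plus parity) is wrong, with no safe way to pick which one to rebuild.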

                  Have you ever experienced bit rot on old 5.25" or 3.5" data discs? Old disks don't work anymore. This problem also applies to ECC RAM: with time, a powered-on server will accumulate more and more random bit flips in RAM, to the point where it crashes. That is the reason ECC RAM is needed. Do you dispute the need for ECC RAM? Do you think servers don't need ECC?

                  .

                  >"...I run fsck via cron which is all scrub is, and so far I've not found any hobbyhorse sh*t on my drives..."

                  Matt, Matt. ZFS scrub is not a fsck. First of all, fsck only checks the metadata, such as the log. fsck never checks the actual data, which means the data might still be corrupted after a successful fsck. One guy did fsck on an XFS RAID in about one minute; think a while and you will understand that is fishy. How can you check 6TB worth of data in one minute? That would mean the fsck read the data at a rate of 100GB/sec, which is not possible with just a few SATA disks. The only conclusion is that fsck does not check everything; it just cheats. Second, you need to take the RAID offline and wait while you do fsck.

                  ZFS scrub checks everything, data and metadata, and that takes hours. ZFS scrub is also designed to be used on a live, mounted, active RAID. No need to take it offline.
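The scrub-versus-fsck difference above can be sketched too: a scrub walks every allocated block and re-verifies its stored checksum, which is why it takes hours and why it catches rot that a metadata-only fsck never reads. A toy Python model (illustrative names, not ZFS source):

```python
import hashlib

def put(data: bytes) -> dict:
    # every allocated block carries the checksum computed at write time
    return {"data": data, "sum": hashlib.sha256(data).hexdigest()}

# a toy "disk" of 100 allocated 512-byte blocks
disk = {blk: put(bytes([blk]) * 512) for blk in range(100)}

disk[42]["data"] = bytes(512)   # silent rot: data changed, stored checksum not

def scrub(disk: dict) -> list:
    """Re-read every block and verify it, as a scrub does on a live pool."""
    return [blk for blk, rec in disk.items()
            if hashlib.sha256(rec["data"]).hexdigest() != rec["sum"]]

assert scrub(disk) == [42]   # only the rotten block is flagged
```

A metadata-only check would have looked at the allocation structures, found them consistent, and reported the toy disk clean; the full-data walk is what finds block 42.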

                  The thing is, to be really sure you don't have silent corruption, you need to do a checksum calculation every time you read/write a block. In effect, you need to do an MD5 checksum. Otherwise, you cannot know. For instance, here is a research paper by NetApp, whom you trust:

                  https://en.wikipedia.org/wiki/ZFS#cite_note-21

                  "A real life study of 1.5 million HDDs in the NetApp database found that on average 1 in 90 SATA drives will have silent corruption which is not caught by hardware RAID verification process; for a RAID-5 system that works out to one undetected error for every 67 TB of data read"

                  If you look at the spec sheet of a new enterprise SAS disk, it says one irrecoverable error per 10^16 bits read. Thus, even SAS disks get uncorrectable errors. And Fibre Channel disks, which are even more high end, also get uncorrectable errors:
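Those spec-sheet numbers are easy to sanity-check with back-of-envelope arithmetic. A small sketch (the 10^16 figure is from the Seagate datasheet linked in this post; the 10^14 consumer-SATA figure is a typical datasheet value added here for comparison):

```python
# Unrecoverable read error (URE) rates, as quoted on drive datasheets:
sas_bits_per_ure = 10**16    # enterprise SAS / FC: 1 error per 10^16 bits read
sata_bits_per_ure = 10**14   # typical consumer SATA: 1 error per 10^14 bits read

def tb_per_error(bits_per_ure):
    """Terabytes you can expect to read before hitting one unrecoverable error."""
    return bits_per_ure / 8 / 1e12   # bits -> bytes -> terabytes

print(f"SAS : ~1 URE per {tb_per_error(sas_bits_per_ure):.0f} TB read")   # 1250 TB
print(f"SATA: ~1 URE per {tb_per_error(sata_bits_per_ure):.1f} TB read")  # 12.5 TB
```

At roughly 12.5 TB per error on consumer SATA, rebuilding a full array of multi-terabyte drives reads enough data that an unrecoverable sector is a realistic event rather than a theoretical one; that is the scale argument in a nutshell.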

                  http://origin-www.seagate.com/www/en-us/staticfiles/support/disc/manuals/enterprise/cheetah/10K.7/FC/100260916d.pdf

                  Matt, again you have very strong opinions on things you have no clue about. It would do you good to catch up on the research; otherwise you just seem lost when people discuss things over your head. And as usual, you never back up any of your strong negative claims, even though we have asked you to. Try to handle all that bitterness inside you? It is difficult to explain things to you when you discard even research papers and continue to regurgitate things you have no clue about.

                  .

                  .

                  BTW, ZFS is free in a gratis and easy distro called FreeNAS. Just set it up and forget it. It is built for home servers. Or maybe it is called NAS4Free now...?

                  1. Matt Bryant Silver badge
                    FAIL

                    Re: @Matt

                    ".....First of all, hardware raid are not safe with respect to data corruption....." Really? And ZFS has no limitations and never crashes and corrupts data? Not according to the online forums!

                    ".....Second, it is true that ZFS uses cpu cycles, a few percent of one core....." On an empty filesystem. As the filesystem fills, ZFS has more work to do just with data checking, let alone shuffling round the disks trying to find spare space with the copy-of-WAFL, throw-the-data-anywhere approach. Then suddenly ZFS is hogging the CPU and demanding masses of RAM just to stop from stalling. And don't even start about encryption, as then you need so much RAM it really is beyond f*cking ridiculous! ZFS is not just a system hog when the filesystem fills, it is a system killer.

                    A streetcleaner machine can clean miles of streets. It can wash and dry and sweep and vacuum as it goes. But I have no intention of using a streetcleaner in my home. Your claims about bit rot - which I have NEVER seen in forty years of computing - are the salesman trying to sell a streetcleaner to the average housewife on the mythical chance she might need to clean a street some day. Pointless.

                    "....BTW, ZFS is free in a gratis and easy distro called FreeNAS...." Keep up. I pointed out FreeNAS, a FreeBSD-based NAS server, was a superior solution years ago when you first started bleating about ZFS. Some idiot decided to force the FreeNAS community to go with ZFS and it promptly got forked to a non-ZFS distro.

                    But here is the most telling point about ZFS - no vendor wants it, even for free. Not one single OS vendor has dropped expensive Veritas for free ZFS. Apple took a look and dropped it like a hot potato. ZFS is a hobby filesystem at best, definitely not suited to the enterprise, and far too limited for me to trust it with my data. You like it, then enjoy it, just quit the paid-for preaching to those of us with better solutions.

                    1. Kebabbert

                      Re: @Matt

                      "..Really? And ZFS has no limitations and never crashes and corrupts data? Not according to the online forums!.."

                      Of course ZFS is not bullet proof; no storage system is 100% safe. The difference is that ZFS is built from the ground up to combat data corruption, whereas the other solutions do not target data corruption at all. They have never thought about it. ZFS is not bug free - no complex software is - but ZFS is safer than the other systems. CERN did a study and concluded this, and there is other research claiming it too. It is no hoax.

                      Earlier, disks were small and slow, and you very rarely saw data corruption. Disks have become larger and faster, but not _safer_: they still exhibit the same error rates. Earlier you rarely read 10^16 bits; today it is easy with large, fast raids, and today you start to see bit rot. The reason you have never seen bit rot is that you have dabbled with small data. Go up to petabyte scale and you will see bit rot all the time. There is a reason CERN did research on this: they store many petabytes of data, and bit rot is a real, big problem for them.

                      Have you seen the spec sheets on a modern SAS / Fibre Channel Enterprise disk?

                      https://origin-www.seagate.com/files/docs/pdf/datasheet/disc/cheetah-15k.7-ds1677.3-1007us.pdf

                      On page 2, it says:

                      "Nonrecoverable Read Errors per Bits Read: 1 sector per 10E16"

                      What does this mean? It means that some errors are uncorrectable. In fact ALL serious disk vendors say the same thing; they have a point about "irrecoverable errors". The disk cannot repair all errors. Just as the NetApp research says: one irrecoverable error for every 67 TB of data read. Read the paper I linked to.

                      .

                      "..Your claims about bit rot - which I have NEVER seen in forty years of computing.."

                      Let me ask you, Matt, how often have you seen ECC errors in RAM? Never? So your conclusion is that ECC RAM is not needed? Well, that conclusion is wrong. Do you agree? That is question A. What is your answer to question A? Have you ever encountered ECC RAM errors in your servers?

                      .

                      "... [you] are the salesman trying to sell a streetcleaner to the average housewife on the mythical chance she might need to clean a street some day. Pointless..."

                      Well, it is not I who did the research. Large, credible institutions and researchers such as CERN, NetApp and Amazon did it; I just repeat what they say. Amazon explains why you have never seen these problems: because you dabble with small data. When you scale up, you see these problems all the time. The more data, the more problems.

                      Amazon explains this to you:

                      http://perspectives.mvdirona.com/2012/02/26/ObservationsOnErrorsCorrectionsTrustOfDependentSystems.aspx

                      "...AT SCALE, error detection and correction at lower levels fails to correct or even detect some problems. Software stacks above introduce errors. Hardware introduces more errors. Firmware introduces errors. Errors creep in everywhere and absolutely nobody and nothing can be trusted.

                      ...Over the years, each time I have had an opportunity to see the impact of adding a new layer of error detection, the result has been the same. It fires fast and it fires frequently. In each of these cases, I predicted we would find issues at scale. But, even starting from that perspective, each time I was amazed at the frequency the error correction code fired...

                      ...

                      Another example. In this case, a fleet of tens of thousands of servers was instrumented to monitor how frequently the DRAM ECC was correcting. Over the course of several months, the result was somewhere between amazing and frightening. ECC is firing constantly. ...The immediate lesson is you absolutely do need ECC in server application and it is just about crazy to even contemplate running valuable applications without it.

                      ...

                      This incident reminds us of the importance of never trusting anything from any component in a multi-component system. Checksum every data block and have well-designed, and well-tested failure modes for even unlikely events. Rather than have complex recovery logic for the near infinite number of faults possible, have simple, brute-force recovery paths that you can use broadly and test frequently. Remember that all hardware, all firmware, and all software have faults and introduce errors. Don’t trust anyone or anything. Have test systems that bit flips and corrupts and ensure the production system can operate through these faults – at scale, rare events are amazingly common."

                      .

                      OK? You have small data, but when you go to large data, you will see these kinds of problems all the time. You will see that ECC is absolutely necessary. As is ZFS. That is the reason CERN is switching to ZFS now.

                      CERN did a study on hardware raid and saw lots of silent corruption. CERN repeatedly wrote the same bit pattern to 3,000 hardware-raid nodes, and after 5 weeks found that the bit pattern differed in some cases:

                      http://www.zdnet.com/blog/storage/data-corruption-is-worse-than-you-know/191

                      "...Disk errors. [CERN] wrote a special 2 GB file to more than 3,000 nodes every 2 hours and read it back checking for errors after 5 weeks. They found 500 errors on 100 nodes."

                      Matt, how about you catch up with the latest research instead of relying on your own experience? I mean, Windows 7 has never crashed for me; does that mean Windows is fit for large stock exchanges? No. Your experience cannot be extrapolated to large scale. Just read the experts and researchers, instead of trying to make up your own reality.

                      1. Matt Bryant Silver badge
                        FAIL

                        Re: @Matt

                        "....Of course ZFS is not bullet proof, no storage system is 100% safe...." Which is why I like clustering. Now, can ZFS cluster? No.

                        "....The reason you have never seen bit rot, is because you have dabbled with small data....." Try the three largest single UNIX database instances in Europe. Actually, don't, because there is no chance of you working at that level. And that's the big difference - I have experience, you just have marketing bumph.

                    2. Kebabbert

                      Re: @Matt

                      "....But here is the most telling point about ZFS - no vendor wants it, even for free. ..."

                      Well, there are many who want ZFS. Oracle sells ZFS storage servers; typically they are much faster than a NetApp server at a fraction of the price. Here are some benchmarks where ZFS crushes NetApp:

                      http://www.theregister.co.uk/2012/04/20/oracle_zfs7420_specsfs2008/

                      There are many more here:

                      http://www.unitask.com/oracledaily/2012/04/19/sun-zfs-storage-7420-appliance-delivers-2-node-world-record-specsfs2008-nfs-benchmark-2/

                      Nexenta is selling ZFS servers, and Nexenta is growing fast - the fastest-growing storage start-up ever. Nexenta is rivaling NetApp and EMC.

                      http://www.theregister.co.uk/2011/03/04/nexenta_fastest_growing_storage_start/

                      Dell is selling ZFS servers:

                      http://www.compellent.com/Products/Hardware/zNAS.aspx

                      There are more hardware vendors selling ZFS; I don't have time to google them for you now, I have work to do. FreeBSD has ZFS. Linux has ZFS (zfsonlinux). Mac OS X has ZFS (Z-410).

                      It seems your wild, false claims have no bearing in reality. Again. It would be nice if you could, just for once, provide some links that support your claims, but you never do. Why? Are you constantly making things up? How do you expect to be taken seriously when you talk of things you don't know and never support anything you say with credible links? Do you exhibit such behaviour at work too? O_o

                      1. Matt Bryant Silver badge
                        FAIL

                        Re: @Matt

                        "....Well, there are many who wants ZFS...." More evasion. I pointed out that not one vendor has dropped expensive Veritas for "free" ZFS and all you do is go off on a tangent. Just admit it and then go stick your head back up McNealy's rectum.

                        ".....Nexenta is selling ZFS servers...." Oooh, a tier 3 storage maker! Impressive - not!

                        ".....Dell is selling ZFS servers...." Dell will sell you an x64 and spread cream cheese on it if you wish. They also sell Linux and Windows Storage Servers, and many, many more of them than ZFS units. They also do not have an OS of their own and therefore have not dropped Veritas and replaced it with ZFS. Fail.

                        ".....There are more hardware vendors selling ZFS...." More evasion. I asked you for one server vendor that has dropped Veritas for ZFS and the answer is NONE.

                        "....It seem that your wild false claims have no bearing in reality...." You wouldn't know reality if it kicked you in the ar$e with both feet. You failed AGAIN to answer the point and pretend that naming cheapo, tier 3 storage players is an answer. It's not. Usual fail. Maybe before you do your next (pointless) degree you should do a GCSE in basic English.

                        /SP&L

                        1. Kebabbert

                          Re: @Matt

                          Matt,

                          No, ZFS can't cluster. This is actually a claim of yours that happens to be correct, for once. Not clustering is a disadvantage, and if you need clustering, ZFS cannot help you. But you can put distributed filesystems on top of ZFS, such as Lustre or OpenAFS.

                          .

                          .

                          "...More evasion. I pointed out that not one vendor has dropped expensive Veritas for "free" ZFS and all you do is go off on a tangent. Just admit it and then go stick your head back up McNealy's rectum..."

                          Well, I admit that I don't know anything about your claim. But you are sure about it, I suppose; otherwise you would not be rude. Or maybe you would be rude even without knowing what you claim?

                          But there are other examples of companies and organizations switching to ZFS. For instance, CERN. Another heavy filesystem user of large data is IBM: the new IBM supercomputer Sequoia will use Lustre on top of ZFS, instead of ext3, because of ext3's shortcomings:

                          http://www.youtube.com/watch?v=c5ASf53v4lI (2min30sec)

                          http://zfsonlinux.org/docs/LUG11_ZFS_on_Linux_for_Lustre.pdf

                          At 2:50 he says that "fsck" only checks metadata, but never the actual data. But ZFS checks both. And he says that "everything is built around data integrity in ZFS".

                          If you google a bit, there are many reports of companies migrating from Veritas to ZFS. Here is one company that migrated to ZFS without any problems:

                          http://my-online-log.com/tech/archives/361

                          .

                          .

                          "...Oooh, [Nexenta] a tier 3 storage maker! Impressive - not!..."

                          Why is this not impressive? Nexenta competes with NetApp and EMC with similar servers, faster and cheaper. Why do you consider NetApp and EMC "not impressive"?

                          .

                          .

                          "...More evasion. I asked you for one server vendor that has dropped Veritas for ZFS and the answer is NONE..."

                          What is your point? ZFS is proprietary and Oracle owns it. Do you mean that IBM or HP or some other vendor must switch from Veritas to ZFS to make you happy? What are you trying to say? I don't know of any vendor, but I have not checked. Have you?

                          .

                          .

                          "...You failed AGAIN to answer the point and pretend that naming cheapo, tier 3 storage players is an answer. It's not. Usual fail. Maybe before you do your next (pointless) degree you should do a GCSE in basic English..."

                          I agree that my English could be better, but as I tried to explain to you, English is not my first language. BTW, how many languages do you speak, and at which level?

                          Speaking of evading questions, can you answer mine? Have you ever noticed random bit flips in RAM triggering the ECC error-correcting mechanism? No? So, just because you have never seen it (because you have never checked for it), ECC RAM is not necessary? I mean, users of big data, such as the Amazon cloud, say that there are random bit flips all the time - in RAM, on disks, everywhere. But you have never seen any, I understand. I understand you don't trust me when I say that my old VHS cassettes deteriorate because the data begins to rot after a few years. This also happens to disks, of course.

                          So, I have answered your question on which vendors have adopted Oracle's proprietary tech: I haven't checked. Probably they don't want to get sued by Oracle.

                          Can you answer my question? Do you understand the need for ECC RAM in servers?

                          1. Matt Bryant Silver badge
                            Facepalm

                            Re: Re: @Matt

                            "No, ZFS can't cluster....." FINALLY! One of the Sunshiners has finally admitted a simple problem with ZFS! Quick, call the press! Oh, hold on a sec, it doesn't seem to have stopped him from spewing another couple of terrawads of dribbling.

                            "....Why is this not impressive? Nexenta competes with NetApp and EMC...." If I stick FreeNAS on an old desktop and hawk it on eBay am I "competing with EMC"?

                            ".....What is your point? ZFS is proprietary and Oracle owns it. Do you mean that IBM or HP or some other vendor, must switch from Veritas to ZFS to make you happy?...." Both hp and IBM are a good case in point. Both pay license fees to Symantec to use their proprietary LVM for their filesystems. If ZFS was so goshdarnwonderful as you say, and "free" to boot, surely hp or IBM would be falling over themselves to use ZFS? They aren't. Indeed, corporate users of SPARC-Slowaris still use Veritas for their filesystems rather than ZFS. There is a reason - ZFS is not as good as you think and there are other options, especially on Linux, that are far superior. So for you to come on here and blindly preach on about ZFS as if it is perfection is just going to get you slapped down by those in the know.

                            ".....Do you understand the need for ECC RAM in servers?" Completely irrelevant to the point in hand. It's like saying "oh, you have house insurance, therefore you must have ZFS!" No, I have house insurance because there is a realistic chance that I will need it, unlike ZFS. There is a demonstrable case for ECC RAM. There is not for ZFS, despite what you claim.

                            1. Kebabbert

                              Re: @Matt

                              I don't understand your excitement at my confirming that ZFS does not cluster. Everybody knows it: Sun explained that ZFS does not cluster, Oracle confirms it, and everybody says so, including me. You know that I always try to back up my claims with credible links to research papers / benchmarks / etc, and there are no links saying ZFS clusters - because it does not. Therefore I cannot claim that ZFS clusters.

                              Are you trying to imply that I cannot admit that ZFS is not perfect, that it has flaws? Why? I have never had any problem looking at benchmarks superior to Sun/Oracle's and confirming them - for instance, that POWER7 is the fastest CPU today on some benches. I have written it repeatedly: POWER7 is a very good CPU, one of the best. You know that I have said so, several times. I have confirmed superior IBM benchmarks without any problem.

                              Of course ZFS has its flaws; it is not perfect, nor 100% bullet proof. It has bugs - all complex software has bugs - and you can still corrupt data with ZFS in some weird circumstances. But the thing is, ZFS is built for safety and data integrity; everything else is secondary. ZFS does checksum calculations on everything, which drags down performance, meaning performance is secondary to data integrity. Linux filesystems tend to make the opposite trade. As ext4 creator Ted Ts'o explained, Linux hackers sacrifice safety for performance:

                              http://phoronix.com/forums/showthread.php?36507-Large-HDD-SSD-Linux-2.6.38-File-System-Comparison&p=181904#post181904

                              "In the case of reiserfs, Chris Mason submitted a patch 4 years ago to turn on barriers by default, but Hans Reiser vetoed it. Apparently, to Hans, winning the benchmark demolition derby was more important than his user's data. (It's a sad fact that sometimes the desire to win benchmark competition will cause developers to cheat, sometimes at the expense of their users.)...We tried to get the default changed in ext3, but it was overruled by Andrew Morton, on the grounds that it would represent a big performance loss, and he didn't think the corruption happened all that often (!!!!!) --- despite the fact that Chris Mason had developed a python program that would reliably corrupt an ext3 file system if you ran it and then pulled the power plug "

                              I rely on research, official benchmarks and other credible links when I say something. Scholars and researchers do so. You, OTOH, do not. I have shown you several research papers - and you reject them all. To me, an academic, that is a very strange mindset. How can you reject all the research on the subject? If you do, you might as well rely on religion and other unverifiable, arbitrary stuff such as healing or homeopathy. That is a truly weird charlatan mindset: "No, I believe that data corruption does not occur in big data, I choose to believe so. And I reject all research on the matter." Come on, are you serious? Do you really reject research and rely on religion instead? I am really curious. O_o

                              So yes, ZFS does not cluster. If you google a bit, you will find old ZFS posts where I explain that one of the drawbacks of ZFS is that it doesn't cluster. It is no secret. I have never seen you admit that Sun/Oracle has some superior tech, or that HP tech has flaws. At my last job, people said that HP OpenVMS was superior to Solaris, and some Unix sysadmins said that HP-UX was the most stable Unix, more stable than Solaris. I have no problem citing others when HP/IBM/etc is better than Sun/Oracle. Have you ever admitted that Sun/Oracle did something better than HP? No? Why are you trying to make it look like I cannot admit that ZFS has its flaws? Really strange....

                              .

                              .

                              "...If I stick FreeNAS on an old desktop and hawk it on eBay am I "competing with EMC"?..." No, I don't understand this. What are you trying to say? That Nexenta is on a par with FreeNAS DIY stuff? If so, it is understandable that you believe it. But if you study the matter a bit, Nexenta beats EMC and NetApp in many cases, and Nexenta has grown triple digits since its start. It is the fastest-growing startup. Ever.

                              http://www.theregister.co.uk/2011/09/20/nexenta_vmworld_2011/

                              http://www.theregister.co.uk/2012/06/05/nexenta_exabytes/

                              http://www.theregister.co.uk/2011/03/04/nexenta_fastest_growing_storage_start/

                              Thus, a FreeNAS PC cannot compete with EMC, but Nexenta can. And does. Just read the articles - or will you reject the facts again?

                              .

                              .

                              "...Both hp and IBM are a good case in point. Both pay license fees to Symantec to use their proprietary LVM for their filesystems. If ZFS was so goshdarnwonderful as you say, and "free" to boot, surely hp or IBM would be falling over themselves to use ZFS? They aren't. ..."

                              Well, DTrace is another Solaris tech that is also good. IBM has not licensed DTrace, nor has HP. What does that prove? That DTrace sucks? No. Thus your conclusion - "if HP and IBM do not license ZFS, it must mean ZFS is not good" - is wrong, because HP and IBM have not licensed DTrace either.

                              IBM AIX has cloned DTrace and calls it Probevue

                              Linux has cloned DTrace and calls it SystemTap

                              FreeBSD has ported DTrace

                              Mac OS X has ported DTrace

                              QNX has ported DTrace

                              VMware has cloned DTrace and calls it vProbes (gives credit to DTrace)

                              NetApp has talked about porting DTrace on several blogs

                              Look at this list. Neither HP nor IBM has licensed DTrace; does that mean DTrace sucks? No - wrong conclusion again. DTrace is the best tool for instrumenting a system, and everybody wants it. Same with ZFS.

                              .

                              .

                              "...There is a reason - ZFS is not as good as you think and there are other options, especially on Linux, that are far superior..." Fine, care to tell us more about those far-superior options? What would that be? BTRFS, which does not even support raid-6 yet? Or was it raid-5? Have you read the BTRFS mailing lists? Horrible stories of data corruption all the time. Some Linux hackers even called it "broken by design". Haven't you read that? Want to see? Just ask me, and I will post the link.

                              So, care to tell us about the many superior Linux alternatives to ZFS? A storage expert explains that Linux does not scale I/O-wise, and that you need to use real Unix: "My advice is that Linux file systems are probably okay in the tens of terabytes, but don't try to do hundreds of terabytes or more."

                              http://www.enterprisestorageforum.com/technology/features/article.php/3745996/Linux-File-Systems-You-Get-What-You-Pay-For.htm

                              http://www.enterprisestorageforum.com/technology/features/article.php/3749926/Linux-File-Systems-Ready-for-the-Future.htm

                              .

                              .

                              "...There is a demonstratable case for ECC RAM. There is not for ZFS, despite what you claim...."

                              Fine, but have you ever noticed ECC firing? Have you ever seen it happen? No? Have you ever seen SILENT corruption? Hint, it is not detectable. Have you seen it?

                              Have you read the experts on big data? I posted several links - NetApp, Amazon, CERN, researchers, etc. Do you reject all those links confirming that data corruption is a big problem as you go up in scale? Of course, when you toy with your 12TB hardware-raid setups, you will never notice it, especially as hw-raid is not designed to catch data corruption. Nor does SMART help. Just read the research papers. Or do you reject Amazon, CERN, NetApp and all the researchers? What is it you know that they don't? Why don't you tell NetApp that their big study of 1.5 million hard disks did not see any data corruption at all - that they just imagined it?

                              http://research.cs.wisc.edu/adsl/Publications/latent-sigmetrics07.pdf

                              "A real life study of 1.5 million HDDs in the NetApp database found that on average 1 in 90 SATA drives will have silent corruption which is not caught by hardware RAID verification process; for a RAID-5 system that works out to one undetected error for every 67 TB of data read"

                              Are you serious when you reject all this evidence from NetApp, CERN and Amazon, or are you just Trolling?

                              1. Matt Bryant Silver badge
                                FAIL

                                Re: @Matt

                                "I dont understand your excitement of me confirming that ZFS does not cluster?...." Oh, I see - you're not going to deny the problem, just deny it is a problem. If it cannot cluster it cannot be truly redundant, whereas free options for Linux can. Anyone buying or building a home NAS thinking they are getting 100% reliability and data safety/redundancy should think again. Trying to pass off ZFS as the answer to all issues is not going to help these people when their NAS dies and they think "But Kebabfart said ZFS would solve all my problems?"

                                "....You, OTOH, do not...." What, now you're saying ZFS does cluster? That's the difference - I stated a fact you could not deny, whereas you just presented opinion pieces long on stats and blather but a little short on undisputed facts.

                                "....IBM has not licensed DTrace, nor has HP. What does that prove?...." That they don't need Dtrace, just like they don't need ZFS, because they have better options.

                                "....Have you ever seen SILENT corruption? Hint, it is not detectable. Have you seen it?...." Have you ever seen a GHOST? Hint, they are not detectable. Have you seen one? Hey, look - I can make a completely stupid non-argument just like Kebbie's!

                                "....Do you reject all those links that confirm that data corruption is a big problem if you go up in scale?...." NAS box, four disks. Even in my paranoid RAIDed cluster, only eight disks. Scale?

                                "....Are you serious..." Well it is hard to take anything you post with any measure of seriousness. FAIL!

        2. jonathan rowe

          Re: I don't get NAS boxes...

          OK Matt, you have 2 disks in RAID 1. One disk says a bit is 0, the other says it is 1 (perhaps flipped by a cosmic ray, a power surge, a flipped memory bit, or an intermittent disk surface error). Which one is correct? You don't know. That's where ZFS comes in. Read up on ZFS and enlighten yourself.
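The tie-breaking point can be made concrete with a toy Python sketch (an illustration of the idea, not ZFS internals: an independently stored checksum is what resolves the disagreement):

```python
import hashlib

def read_mirror(copy_a, copy_b, expected_sum):
    """Plain RAID-1 cannot tell which mirror is right when they disagree;
    a checksum stored away from the data can. Return the intact copy."""
    for copy in (copy_a, copy_b):
        if hashlib.sha256(copy).hexdigest() == expected_sum:
            return copy   # this copy matches; the other can be rewritten from it
    raise IOError("both copies corrupt")

good = b"\x00\x01\x02\x03"
flipped = b"\x00\x01\x02\x02"   # one bit flipped on one mirror
csum = hashlib.sha256(good).hexdigest()

# Whichever mirror rotted, the checksum identifies the intact copy:
assert read_mirror(good, flipped, csum) == good
assert read_mirror(flipped, good, csum) == good
```

With only the two copies and no checksum, the best a RAID controller can do is pick one arbitrarily; with the checksum it can also repair the bad mirror from the good one.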

          1. Matt Bryant Silver badge
            FAIL

            Re: I don't get NAS boxes...

            "OK matt, you have 2 disks in RAID 1....." Well, actually I have two sets of four disks with hardware RAID5 on proper Adaptec cards, and then software mirroring between the two chains of disks, which I couldn't do with ZFS. So far it's been up, except for mirror splits for backups and fscks, for three years - no bit rot. In fact I have never seen a case of the mythical bit rot you Sunshiners insist is always just waiting to happen, either professionally or at home.

            ".....(perhaps flipped by a cosmic ray, power surge, flipped memory bit, or an intermittent disk surface error)....." What, no hobbyhorse sh*t on the drive surface, surely just as likely?

            "....You don't know....." Oh but I do know male bovine manure when I hear it, and you're so full of it it's coming out your ears!

            ".......Read up on ZFS and enlighten yourself." Instead, why don't you tell me when ZFS is going to get features like online shrink that it needs to match better file systems like OCFS2? I suggest it is you that needs to do a shedload more reading about the alternatives instead of just parroting the Sunshine.

            /SP&L

    3. annodomini2

      Re: I don't get NAS boxes...

Your i3 running full Windows with all those drives will draw hundreds of watts; these devices typically draw 25-50W.

So let's say yours runs at 300W, 24/7 all year. That's 2,628kWh.

At 50W, 24/7 all year, that's 438kWh.

That's 6 times the electricity.

@15p/kWh

The NAS costs £65.70/year to run vs £394.20 for the i3.
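The arithmetic above can be checked with a short sketch (the 15p/kWh tariff is the assumption from the post):

```python
# Annual running cost of an always-on box at an assumed 15p/kWh tariff.
HOURS_PER_YEAR = 24 * 365  # 8760

def annual_cost(watts, pence_per_kwh=15):
    kwh = watts * HOURS_PER_YEAR / 1000   # energy used over a year, in kWh
    pounds = kwh * pence_per_kwh / 100    # pence -> pounds
    return round(kwh, 1), round(pounds, 2)

print(annual_cost(300))  # i3 tower: (2628.0, 394.2)
print(annual_cost(50))   # NAS box:  (438.0, 65.7)
```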

      1. Michael Habel

        Re: I don't get NAS boxes...

        That's why Op fails...

      2. Back to school
        FAIL

        Re: I don't get NAS boxes...

"Your i3 running full Windows with all those drives will draw hundreds of watts; these devices typically draw 25-50W.

So let's say yours runs at 300W, 24/7 all year. That's 2,628kWh."

I have a download box using a Sandy Bridge dual-core Celeron G530, a DC-DC power supply and an SSD. The system draws 17W at the wall when idle running XP Pro and peaks at around 40W under 100% CPU load. That's a standard 65W chip with comparable idle consumption to an i3.

The base power draw of the system is therefore 17W plus drives, which is in the ballpark of these NAS units.

        If you need big storage, buying multiple NAS units isn't a great option.

In my view these are OK as an always-on basic device with a couple of drives, provided the base unit costs around £100.

£300-500 for what's basically a simple CPU board and a box for drives is crazy.

      3. K
        Thumb Down

        Re: I don't get NAS boxes...

        Sorry, but you talk bollocks :)

I built and use an Intel i3-based NAS box with 5 hot-swap drives. The case has a 150W power supply, but it actually draws less than 100W. The drives are Samsung 5.2K rpm ECO drives. And no, the HDDs don't power down; in fact I run VMware on it with at least 4 VMs running at all times.

    4. Michael Habel
      Meh

      Re: I don't get NAS boxes...

      And how much Juice does that thing swallow in a year?

While your rig sounds mighty impressive, I for one would prefer something more lightweight in the power-consumption category for something that's meant to be up 24/7/365, plus the odd leap day.

Even if I couldn't care less about global warming (or just plain warming my home, for that matter), I do tend to side with the Greens here. (Green = money, i.e. money saved from having to pay for all that up-time such a rig would imply, with its 500W+ PSU.)

So, OTOH, it's a quick and dirty way to get something up and running. But unless you're in the 47%, it's not really value for money.

      1. Matt Bryant Silver badge
        Boffin

        Re: I don't get NAS boxes...

"....I for One would prefer something more light-weight in the power consumption category..." Try using laptop mobos in a DIY NAS; many have an eSATA port or USB ports you can attach drives to. Laptop drives are also lower on power consumption and heat output, laptop fans are usually not that noisy, and parts are readily available. And if you don't feel confident about building a DIY rig or configuring Linux, you can even just use the laptop as-is and configure WinXP to share out Windows volumes, if all you have is Windows clients. WinXP has all the networking (and some simple security) required for such a task. You can use a laptop as a NAS with external drives, and then in an emergency you have something you can use as a spare desktop should your main desktop/laptop fail. For 90% of households that's all that is really needed.

  7. Mage Silver badge

    Raid0

    What sort of useful test is that?

    I'd only buy one of these if it was good for Raid 5 or 6.

    1. MacGyver
      Coffee/keyboard

      Re: Raid0

I agree, and as someone who looked EVERYWHERE for a nice enclosure with 5 hot-swappable external bays (plus a 6th internal one for the OS drive): they don't exist.

I ended up going with a Chinese case that had 5 tool-less internal bays, and a lower tray that can hold 2 more. Anyone who makes their own NAS knows you need at least RAID5, and if you have ever lost a RAID5 NAS, you know you really should have had a RAID6 array.

I prefer using software RAID6 (a la Linux mdadm) because it doesn't lock me into an expensive hardware card that has to be replaced by the exact same model in the event of a failure. Given the speed of CPUs (i3) and my demands (at most 4 requests at a time), software RAID affords me the flexibility of moving my array to any flavour of Linux that supports mdadm, and OSS gives me numerous management and diagnostic tools to build/diagnose/repair all manner of issues that might pop up (Windows 2003 offered next to zero tools for dealing with software RAID). The only thing that could cause me to lose data at this point would be losing 3 of my 5 drives at the same time.
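The parity idea behind RAID5/6 can be seen in a toy sketch: a single XOR parity chunk (RAID5-style) lets any one lost "disk" be rebuilt from the rest, and RAID6 adds a second, independently computed syndrome so any two can be. This is an illustration only, not what mdadm literally does on disk:

```python
from functools import reduce

def xor_blocks(blocks):
    # byte-wise XOR across equal-length chunks
    return bytes(reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks))

data = [b"chunk", b"among", b"disks"]   # three equal-size data "disks"
parity = xor_blocks(data)               # the RAID5-style parity "disk"

# "lose" disk 1, then rebuild it from the survivors plus parity:
# d0 ^ d2 ^ (d0 ^ d1 ^ d2) == d1
rebuilt = xor_blocks([data[0], data[2], parity])
print(rebuilt == data[1])  # True
```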

    2. Anonymous Coward
      Anonymous Coward

      "RAID 5 or 6"

      No, no, and thrice no! Parity = bad.

      http://www.miracleas.com/BAARF/RAID5_versus_RAID10.txt

      http://www.infostor.com/index/articles/display/107505/articles/infostor/volume-5/issue-7/features/special-report/raid-revisited-a-technical-look-at-raid-5.html

      http://www.ecs.umass.edu/ece/koren/architecture/Raid/basicRAID.html#RAID%20Level%205

      1. MacGyver
        Holmes

        Re: "RAID 5 or 6"

        A RAID5 array with 4 drives (1TB + 1TB + 1TB + 1TB = 3TB) and I can lose 1 disk before I lose data.

        A RAID6 array with 5 drives (1TB + 1TB + 1TB + 1TB + 1TB = 3TB) and I can lose 2 disks before I lose data.

        A RAID10 with 4 drives (1TB + 1TB + 1TB + 1TB = 1TB) and you can lose all but one before you lose data.

        RAID10 great for speed and redundancy, and bad on storage space.

        RAID5 is ok for speed, bad for redundancy (if you lose 2 at once), and great for space.

        RAID6 is ok for speed, great on redundancy, and ok for space.

        I guess I should have added. "If you are not rich, and don't have infinite space, use RAID6, otherwise use RAID10." I would guess most people buying a sub $500 NAS don't have an infinite budget.

        1. Wilkenism
          Headmaster

          Re: "RAID 5 or 6"

          Some interesting calculations going on there!

          Pretty sure 4 x 1TB drives in RAID10 gives 2TB of usable space, not 1TB :)

Also, RAID6 is better on redundancy than RAID10, as ANY two disks can fail in RAID6 (due to distributed parity), however in RAID10 it depends:

          Remember that RAID10 is just mirrored arrays (RAID1) inside a striped array (RAID0) if two disks fail from the separate mirrors, no biggie, the array can be rebuilt. If both disks are from the same mirror, you've lost all your data! (How often do two disks fail at the same time for small arrays like this anyway? And how unlucky would you have to be for both of them to be in the same mirrored array?!)

But for the performance you get over either of the other implementations, it might be worth the loss in capacity (against both) and in redundancy (against RAID6) - especially if you add further RAID1 arrays within the RAID0 stripe: 3 RAID1 arrays would mean [almost] 3x the performance of a single disk!
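The capacity and fault-tolerance trade-offs being argued over above can be tabulated with a tiny helper (sizes in TB; the failure count is the worst-case guarantee, since RAID10's best case depends on which disks die):

```python
def raid_summary(level, n_disks, size_tb=1):
    """Usable capacity and guaranteed (worst-case) failures survivable."""
    if level == 5:
        return (n_disks - 1) * size_tb, 1   # one disk's worth of parity
    if level == 6:
        return (n_disks - 2) * size_tb, 2   # dual parity
    if level == 10:
        # striped mirrors: half the raw space; only one failure is
        # guaranteed, since two failures in the same mirror lose the array
        return (n_disks // 2) * size_tb, 1
    raise ValueError(f"unsupported level {level}")

print(raid_summary(5, 4))   # (3, 1)
print(raid_summary(6, 5))   # (3, 2)
print(raid_summary(10, 4))  # (2, 1)
```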

  8. Dwayne
    IT Angle

    iSCSI

Any feedback on which units support iSCSI? Clarification on whether shared CIFS/NFS mounts are supported would also be helpful.

    1. AOD
      Thumb Up

      Re: iSCSI

      I can't speak for any of the other brands but the QNAP software (I have a TS-410) supports iSCSI, CIFS, NFS and a whole bunch more. It supports dynamic disk expansion so adding more/larger disks doesn't mean you lose access to your data while it does its thing.

      As for the whole "why not roll your own" argument, well to be honest, you're paying for the convenience more than anything else. The HP microservers mentioned elsewhere are nice bits of kit, but AFAIK, they don't support hot swapping drives with the stock BIOS (whereas a lot of the NAS units will support hot swapping).

      My TS-410 acts as a focal point for our movies (happily feeding multiple Apple TVs running XBMC), stores our photos (which are backed up to S3 and Crashplan) and also acts as a backup destination for our home machines.

      It also runs Sickbeard with Sabnzbd and wakes up a hibernating XBMC client via WOL to update the shared mysql media library (also on the QNAP) when something new has arrived.

      I spend most of my days solving IT related FUBARs so when I get home, I don't really want to do that all over again. The QNAP is a bit of kit that I can just leave to get on with it knowing that if there is an issue, it will either email me (assuming it can) or I can get some guidance from a helpful user community. The most serious issue I've had with it was when I found it flashing lights on two drives claiming they were degraded/not available (the unit has 4 x 2TB drives running in RAID5). Turned out it was caused by a brief power outage (and the drives were fine after a complete power cycle), following which my next purchase was a UPS to prevent a repeat.

  9. Alan Brown Silver badge

    not enough bays

Disks are crap - and large ones can be expected to regularly provide corrupted data which their ECC hasn't picked up (statistically it's about 4 sectors on a 2TB drive if you read from end to end). 4 drives isn't enough for decent RAID levels, and RAID has "issues" compared with more advanced systems such as ZFS (which is designed from the ground up with the assumption that not only do disks fail, their ECC is flaky, so it detects and CORRECTS such errors).

Seriously, with the amount of stuff that people are piling into their media servers, 20TB isn't that much anymore, and for proper resilience with large drives you need 7 of 'em to ensure good metadata spread.

These external NASes are far too expensive compared with simply shoving 4 or more drives into a low-spec PC and installing FreeNAS or similar as the OS.

    1. This post has been deleted by its author

    2. Anonymous Coward
      Anonymous Coward

      Re: not enough bays

      While possibly/probably effective, your solution does not work out of the box and relies largely on self support.

      These NASes can be in use for file storage within 10 minutes of opening the box.

And for compactness a NAS is hard to beat: a Synology 413j Slim gives 4-disk RAID in a box 120 x 105 x 142mm, which would sit comfortably next to the TV in the lounge or on the desk in the study.

      1. Infernoz Bronze badge

        Re: not enough bays

These off-the-shelf NASes may be pretty and small, but they have tiny disk capacity, poor and noisy cooling, cheap PSUs, and they all use unsafe logging filesystems.

My midi PC box has two hidden, filtered, slow, quiet 120mm fans to keep 5 hard disks cool (with total space for 8 dampened disks), a cool dual-core AMD E-350 mobo with 8GB RAM, and a generously over-rated PSU, hosting FreeNAS 8.3; it is quiet, attractive and only uses 50W - all at a big saving over an off-the-shelf NAS, and more capable too. I have upgraded the OS several times since the NAS was built; no rebuild required.

        My next FreeNAS box will have a lot more capacity and possibly a low power i3, given I realise that although the CPU was not stressed at high load, the I/O bandwidth probably is, so I need to go for a more powerful mobo and CPU.

There is plenty of support for FreeNAS, and it's quicker too: they have full docs, a forum, and an IRC channel online, which easily beats most commercial support. E.g. when an OS upgrade messed up remounting my RAID array, I discussed the issue via IRC, a fix was rolled into an update release, and I was up and running again within an hour; IMO better than phone support :)

      2. JEDIDIAH
        Linux

        Re: not enough bays

        Putting an array together is 5 minutes of work. You Google it once and you are set for the next 5 years or however long your setup manages to meet your requirements.

        Just knowing "what buttons to push" on an appliance is going to put you way beyond the skill or comfort level of most people. The shiny happy interface (or lack of one) really isn't the biggest problem here.

        4 disks just isn't enough. Not enough bays to handle redundancy or parity and hot spares and such.

        1. Anonymous Coward
          Anonymous Coward

          PARITY = BAD

          Do not use parity. You will regret it.

          http://www.miracleas.com/BAARF/RAID5_versus_RAID10.txt

          http://www.infostor.com/index/articles/display/107505/articles/infostor/volume-5/issue-7/features/special-report/raid-revisited-a-technical-look-at-raid-5.html

          http://www.ecs.umass.edu/ece/koren/architecture/Raid/basicRAID.html#RAID%20Level%205

        2. Alan Edwards

          Re: not enough bays

          > Putting an array together is 5 minutes of work.

          And then the thick end of a day for it to actually build the array :-)

    3. Matt Bryant Silver badge
      Facepalm

      Re: not enough bays

      "....with the amount of stuff that people are piling into their media servers, 20Tb isn't that much anymore...." So stick it on the cloud and let someone else look after it on proper arrays, which make ZFS look like the toy software it is. ZFS can't cluster and offers SFA resilience as it can't even work properly with hardware RAID. Seriously, get a high-speed internet connection and leave the media on iTunes, Amazon, StorageMadeEasy, Microsoft or some other cloud where it will be protected and replicated between massive datacenters, and probably at less cost than buying a four-slot NAS every couple of years and backing it up yourself. 99% of the cruft stored on home NAS units could be stored on the cloud with a little thought and planning, even by as simple a method as emailing it to yourself in Hotmail. If you're feeling paranoid then encrypt it before you store it but you will have to accept the penalty of having to decrypt it before you can use it again.

      1. Paul Crawford Silver badge

        Re: not enough bays

        Seriously, you think that a home/small business internet connection can support access to 20TB of data in the cloud?

      2. Anonymous Coward
        Anonymous Coward

        Re: not enough bays

        You really are either a clueless moron or a piss taking enterprise BOFH:

1. Yes, people do need lots of space now, especially SMEs; no, they won't pay enterprise prices, ever!

2. The cloud is WAY too expensive and slow for 20TB of data; and the costs and risks will shock you!

3. The internet is hideously slow even on 80Mbit fibre for this volume of data, and congestion and latency can be horrible compared to a local NAS.

        4. Mailing multiples of your mailbox capacity to yourself in Hotmail; you must be on Class A drugs!

        5. ZFS is pretty much as good as it gets, and free in FreeNAS; I know I use it a lot!

        I won't even discuss the rest, it's completely irrelevant, especially enterprise level stuff like clustering!

        1. Matt Bryant Silver badge
          FAIL

          Re: Re: not enough bays

"You really are either a clueless moron or a piss taking enterprise BOFH...." Well, a bit of the latter really - I work with enterprise kit but have completely different requirements at home. And I do like taking the piss out of morons like you.

"....1. Yes people do need lots of space now, especially SMEs; no they won't pay enterprise prices, ever!..." So they don't. They buy stuff like the Microserver mentioned. If their business grows they move up to the SMB ranges from people like HP or Dell.

".....2. The cloud is WAY too expensive and slow for 20TB of data; and the costs and risks will shock you!...." It's called storage tiering, and it works for individuals as well as big corporations. Stuff of low importance: back it up to writeable DVD. Stuff of high importance: stick it on the cloud. Who said anything about 20TB?

".....3. The internet is hideously slow even on 80Mbit fibre for this volume of data, and congestion and latency can be horrible compared to a local NAS....." Yes, but do you look at every item on your NAS and require it instantly? Most people I know actually treat their home NAS more as an archive - stuff they have finished with gets shifted off their laptop/desktop to be stored on the NAS. If you need constant access then a home fileserver would probably be a better idea than a NAS.

".....4. Mailing multiples of your mailbox capacity to yourself in Hotmail; you must be on Class A drugs!...." Storage tiering - it's an easy way to store important docs. I can send myself encrypted material if I'm worried about MS (or hackers) taking a peek, and I can access it from just about any device with internet connectivity, from anywhere in the world. For example, I keep scans of my passport and other travel docs in an encrypted and compressed file in Hotmail, and it was a lifesaver when my hotel room was burgled in Beirut. I've been doing it roughly since Hotmail was launched. You can also be naughty and run several Hotmail accounts to spread the load and ensure one hacked account doesn't mean you lose everything - just don't call them something obvious like joefilestore1@hotmail.com, joefilestore2@hotmail.com..... And Hotmail now comes with free online Office for editing if you're really stuck somewhere with nothing but a smartphone. Try a little thinking outside the box before you start shrieking about drug use.

          "......5. ZFS is pretty much as good as it gets, and free in FreeNAS; I know I use it a lot!..." Ah, I see your rabid and frothing response is not based on any calm and rational thought as much as a Sunshiner desire to defend your Holy ZFS. I can't help it if your love of ZFS makes you blind to better and simpler solutions, and - frankly - I couldn't give a damn if you're too stupid to consider other options. Your loss.

          "......I won't even discuss the rest, it's completely irrelevant, especially enterprise level stuff like clustering!" Really? Why not? Because your product can't do it. I can make two cheapo Linux servers and set up clustering between them. I can do the same with Windows. But you can't do it so you refuse to discuss it. True, the average home user won't think of it, they may actually think that buying a NAS means they have resilience and 100% data availability. I work with enterprise kit so I tend to think the more resilience the better, and seeing as I have access to lots of excess kit whenever we hit the three-year refresh cycle, it's pretty easy for me to implement at home. It's like the saying goes, ask a London cabbie what the best family car is and he won't say a BMW or Ford, for him it's a black cab. For you it's obviously a soapbox kart, but that's your problem.

          /SP&L

      3. Kebabbert

        Re: not enough bays

        "... ZFS can't cluster and offers SFA resilience as it can't even work properly with hardware RAID..."

Matt, Matt. As I tried to explain to you, hardware RAID is not safe. I have shown you links on this. And NetApp research says that too; read my post here to see what NetApp says about hardware RAID. There is much research on this. Why don't you go and check what the researchers in comp sci say on this matter, instead of just taking my word for it?

OTOH, researchers say that ZFS protects against all the errors they tried to provoke, and concluded that ZFS is safe. When they tried to provoke and inject artificial errors into NTFS, ext, XFS, JFS etc, they all failed their error detection. But ZFS succeeded. There are research papers on this too; they are here (papers numbered 13-18):

        https://en.wikipedia.org/wiki/ZFS#Data_Integrity

        .

And you talk about the cloud. Well, cloud storage typically uses hw-RAID which, as we have seen, is unsafe. And the internet connection is not safe either: you need to do an MD5 checksum to see that your copy was transferred correctly. You need to do checksum calculations all the time - just what ZFS does, but hw-RAID does not. Therefore you should trust your home server with ECC and ZFS more than a cloud. Here is what the cloud people say:

        http://perspectives.mvdirona.com/2012/02/26/ObservationsOnErrorsCorrectionsTrustOfDependentSystems.aspx

        "...Every couple of weeks I get questions along the lines of “should I checksum application files, given that the disk already has error correction?” or “given that TCP/IP has error correction on every communications packet, why do I need to have application level network error detection?” Another frequent question is “non-ECC mother boards are much cheaper -- do we really need ECC on memory?” The answer is always yes. At scale, error detection and correction at lower levels fails to correct or even detect some problems. Software stacks above introduce errors. Hardware introduces more errors. Firmware introduces errors. Errors creep in everywhere and absolutely nobody and nothing can be trusted...."

        .

        Matt, read and learn?
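The transfer check described above (hash the file before upload, hash it again after download, compare) is only a few lines; a minimal sketch, streaming the file so size doesn't matter:

```python
import hashlib

def file_digest(path, algo="md5", chunk_size=1 << 20):
    # stream in 1MB chunks so arbitrarily large copies don't need to fit in RAM
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk_size), b""):
            h.update(block)
    return h.hexdigest()

# usage: compare file_digest("local.iso") with the digest computed on the
# far end; a mismatch means the copy was corrupted somewhere in between
```

(For integrity rather than speed, a stronger algorithm such as `"sha256"` can be passed for `algo`.)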

        1. Matt Bryant Silver badge
          FAIL

          Re: Re: not enough bays

"... As I tried to explain to you, hardware raid are not safe..." Usual Kebabfart - lots of blather, lots of evasion, no answers to the point raised. Come on, just admit it: you can't cluster ZFS, and it introduces a big SPOF into any design. For a hobby NAS, provided you can afford to pay for ridiculous amounts of RAM and CPU, it might be passable, but there are far better solutions that can work with a lot less hardware AND can be clustered if required.

Someone forgot to tell you: Sun is dead. Stop trying to flog a dead horse; they won't give you any more paid-for blogging awards.

    4. petur
      Stop

      Re: not enough bays

      What makes you think there are only 4-bay models?

A quick look at the QNAP site shows they have models with 1, 2, 4, 5, 6, 8, 10 and 12 bays, the latter ones running beefier Intel CPUs, not Atoms.

And as for building one yourself: sure, why not, just like you can build your PC yourself. Some people prefer that; others go for a pre-built model. Another advantage of these NAS boxes is that they are *very* compact, and most certainly use less power than anything you build yourself. And there's no worry about hardware compatibility: the OS that comes with them supports its hardware, something that isn't automatically so for build-your-own boxes.

      1. JEDIDIAH
        Linux

        Re: not enough bays

        Intel parts aren't nearly as power hungry as they used to be. Power management is a lot better across the board. So there are fewer and fewer reasons to shell out the cash for an appliance.

...and while there are more "robust" appliances, those are even more ridiculously overpriced than the small ones that are the subject at hand.

  10. Woodnag

    ZFS

    As various posters have mentioned, ZFS (available as part of FreeBSD) is the only reliable FS available for free. Trouble is, Sun hasn't open sourced the version with native encryption yet, and alternatives (GELI) are frankly a PITA.

Honestly, setting up and using FreeNAS on some old machine with ZFS is dead easy, and it gives plenty of early warning when a drive is going dubious. However, you do need 8GB of RAM as a practical minimum.

    1. Ramazan
      Alert

      Re: native crypto

You should never use FS-level crypto - opt for PV-level instead (only /boot is in the clear; everything else, including the swap partition, is encrypted with a passphrase no shorter than 24 characters).

  11. Andy E
    WTF?

    Thecus? Recommended? Misguided Fool!

I was interested in the article up until I got to the bit where Thecus was recommended. I own a Thecus N2200Plus box and it is utterly crap. Most of the features don't work, and the support from Thecus is appalling. The support forums are littered with people's distress stories, and I have personal experience of losing data when the RAID array just stopped working for no apparent reason.

  12. Peter Galbavy

ReadyNAS v2 for £280? Really?

Just bought one from Amazon for £145 empty, to replace my original Infrant NV+...

  13. petur
    FAIL

    Weird NAS selection

Certainly on the QNAP part, as both models are in fact LOW-end models, not high-end as the review says... If they had taken a TS-459 or even a TS-469 it would have blown the competition away (my TS-269 saturates gigabit (100MB/s) and needs dual LAN + a beefier switch to deploy its full potential).

    Given the selection of models, it is easy for the reviewer to steer the outcome of the article.

    1. Mark 65

      Re: Weird NAS selection

It does seem strange that the choice of QNAP appliance wasn't at the same level in the range as the Synology one. They are generally more expensive, though, so maybe that had a bearing, but I agree that the 459 would have been a better choice, and it achieves over 100MB/s writes. I used to have a tower system running Linux but moved to a QNAP as it can sit in the lounge and is small and quiet in operation. I like the appliance nature. My decision may have been different if the HP server people mention had been available then.

  14. Paul Crawford Silver badge

    Data integrity?

One critical issue in my view is data integrity. That is what a NAS is supposed to do: store data reliably. But the article fails to address that. Do they support internal file systems that have data checksums (like ZFS)?

If not (and this is important even with ZFS), do they support automatic RAID scrubbing, where all of the HDDs are periodically read and checked for errors in the background?

Most folk at home will only have 1 HDD of protection (RAID-1 or RAID-5), and what happens later in life is that a HDD fails, you replace it, and you find bad sectors on the other disk(s), thus corrupting the valuable data. With two HDDs of protection (e.g. RAID-6 or ZFS's RAID-Z2) you can cope with one error per stripe of data while rebuilding, but that is not always enough.

That is why you want to check once per fortnight/month that the HDDs are all clean, and in so doing allow the HDDs to internally correct/re-map sectors that had high error rates when read, and, if that fails, to re-write any uncorrectable ones from the rest of the RAID array.

    Of course, sudden HDD failure happens, maybe even multiple HDDs, or PSUs, as does "gross administrative error", which is why you should all repeat "RAID is not a backup" twice after breakfast...

    1. petur

      Re: Data integrity?

      I think most NAS models support disk checks. My QNAP monitors SMART and can be scheduled to do quick but also extensive disk tests looking for bad blocks.

Sadly no ZFS (yet)

      1. Paul Crawford Silver badge

        @petur

        The problem with simply monitoring the SMART status is it won't know about bad sectors until you try to read them. Often by then it is too late.

SMART has support for a surface scan, and while that allows marginal sectors to be re-written, it just reports any uncorrectable/re-mappable sectors as bad, and you generally won't know about that until a HDD fails and you need to re-build the array.

        Hence the advantage of the RAID scrub process:

        1) It accesses all of the HDD sectors (or all in-use ones in the case of ZFS), forcing the HDD to read and maybe correct/re-map any that are marginal, just as the SMART surface scan will do.

        2) For any that are bad, it, by virtue of being in a RAID system, can then re-write any bad sectors with the data from the other HDD(s) and that will normally 'fix' the bad sector (as the HDD will internally re-map a bad one on write, and you still see it as good due to the joys of logical addressing).

Recent Linux distros like Ubuntu will do a RAID scrub on the first Sunday of the month if you use software RAID, which is good. But I don't know of any cheap NAS that pays similar attention to data integrity.

        Not counting RAID-0, OK?
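For the curious, the monthly scrub Ubuntu schedules boils down to poking md's sysfs interface. A hypothetical sketch (the sysfs root is a parameter here purely so the function can be exercised without root on a real array):

```python
from pathlib import Path

def start_scrub(md="md0", sysfs=Path("/sys")):
    # Writing "check" to md's sync_action file makes the kernel read every
    # sector of the array in the background and repair any unreadable ones
    # from redundancy - exactly the scrub described above.
    target = sysfs / "block" / md / "md" / "sync_action"
    target.write_text("check\n")
    return target
```

On a real system this needs root, and progress can be read back from the adjacent `sync_completed` file.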

      2. Mark 65

        Re: Data integrity?

Not sure if the QNAPs will ever get ZFS, as I believe its memory requirements for good operation exceed what most boxes will have - I believe 1GB per TB of storage is recommended, with typically 8GB min. for good performance. My TS-439 has 1GB, as do most others.

  15. jonathan rowe
    Thumb Up

    microserver N40L

Don't waste your time with any of these; an N40L with 8GB of ECC RAM and an Intel NIC (the N40L's built-in one does not do jumbo frames) will wipe the floor with them performance-wise. It has an internal USB slot onto which you install FreeNAS, and then you get ZFS.

    ZFS + RAIDZ2 + ECC memory - don't trust your precious data to anything less.

  16. Kevin McMurtrie Silver badge
    Thumb Down

    For when the world isn't perfect

    I use NAS for backups so I like to see some protection against the usual problems.

What happens when a power failure interrupts writes? What happens when the NAS is in redundant mode and a disk fails? Does it send an e-mail, blink an LED that will never be seen, or pretend nothing is wrong? What happens when a failed drive is replaced? Can bundled drives be replaced under warranty without long downtime? There are plenty of NASes out there that claim RAID 5 protection but are unusable for days when something goes wrong. I recall an old D-Link and a more recent LaCie 5big that needed to be wiped clean and shipped off for warranty drive replacement. Even if they had simply sent me a new drive, they would have needed days to rebuild too. I don't like being without backups for days/weeks, so I end up buying a different brand of NAS and giving away the old one when it comes back. What a waste of money.

    1. Mark 65

      Re: For when the world isn't perfect

QNAPs will email alerts; same goes for Synology, I would imagine. As for power interruptions, if you worry about your data enough to be using a RAID-equipped NAS then I suggest you spring for an APC UPS that can send notifications via its USB connector, which the NAS will act upon (configurable in the GUI). I used to have a UPS on my PC before I bought the NAS to guard against power failures, as it seemed only sensible. Array rebuild time will be a function of the processor, as it's doing a fair amount of work; a 2TB disk replacement caused a rebuild taking hours on a QNAP rather than days. It will also real-time sync to an external backup, send data to Amazon S3 or Elephant Drive, or sync to another remote NAS. Both companies have built-in SSH amongst other things on their appliances.

    2. Mark 65

      Re: For when the world isn't perfect

      FYI - smallnetbuilder.com is the site to checkout on these matters.

  17. Sean Timarco Baggaley
    WTF?

    @ZFS Fanboys:

Yes, we get it, ZFS does some neat stuff. Guess what? Most people (myself included) find it easier to just run a regular (in my case, weekly) backup of important data to an external drive connected to the NAS box via USB. (I also make a weekly clone of my computer's drive on the same day.) Job done.

As for why I bought a ready-built NAS appliance: I did so for the same reason I prefer to live in ready-built homes. My time is worth money. I'm worth £300 a day as a technical author. (And that's cheap: some charge as much as £700 a day.) I'm not a fan of UNIX in any of its flavours, so setting up even a FreeNAS box isn't something I enjoy. I'd spend hours perusing the web to find out the best practices, the arcane spells that need to be typed into the shell, and so on. On top of which, I'd also have to order all the parts and wait for them to be delivered.

    Why the hell would I waste £600 or more of my time (and days of my life) working on a device I can just buy off the shelf for less than half that, and which would be up and running within minutes of my taking it out of the packaging?

    Just because YOU enjoy a bit of DIY in your preferred field of expertise, it does not follow that everyone else does too. My background is in software, not hardware. I know how the latter works, and I've built dozens of PCs over the years – mostly for relatives and friends – but it is not something I find particularly rewarding.

    I have no more interest in building my own NAS boxes and laptops than I do in building my own home or car. The time required for the DIY approach is not 'free' unless you actually enjoy doing that sort of thing as a hobby. I don't, so, as far as I'm concerned, it's time wasted on doing something boring and irritating instead of time I could be earning doing something fun and rewarding.

    1. jonathan rowe

      Re: @ZFS Fanboys:

      Sean, believe me if any of these NAS boxes used ZFS (or BTRFS or the new windows FS) I would buy one at the drop of a hat, but my data is just too important to put to chance. I am glad you backup, but if the data on the disk goes bad, then so do all your backups - the problem is that you don't know that your data is corrupt until it is too late and all your backups have been 'polluted' with bad data.

      You can do a FreeNAS setup in about half a day - there are no arcane spells involved at all, a modest investment compared to the immeasurable expense of losing important data or, worse still, not knowing that you have lost important data when your NAS box says 'yep, all hunky dory'.
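The bit-rot problem described above is exactly what per-block checksums catch: the disk returns corrupt data without any I/O error, and only a stored checksum reveals it before the corruption reaches your backups. A minimal sketch of the idea (nothing like ZFS's actual on-disk layout):

```python
# Why a checksumming filesystem matters: a plain backup happily copies
# corrupt blocks, but a checksum stored at write time lets a read spot
# the rot. Minimal sketch only, not how ZFS lays anything out on disk.
import hashlib

def store(block: bytes) -> tuple[bytes, str]:
    """'Write' a block alongside its checksum."""
    return block, hashlib.sha256(block).hexdigest()

def verify(block: bytes, checksum: str) -> bool:
    """On read, recompute and compare - a mismatch means silent corruption."""
    return hashlib.sha256(block).hexdigest() == checksum

block, checksum = store(b"family photos")
assert verify(block, checksum)

# One flipped bit - the disk returns it with no error at all...
rotten = bytes([block[0] ^ 0x01]) + block[1:]
print(verify(rotten, checksum))  # False: caught before polluting backups
```

Without the checksum, the `rotten` block is indistinguishable from good data, which is the "all hunky dory" failure mode the comment describes.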

      1. jockmcthingiemibobb
        Thumb Up

        My vote also goes on the Readynas or Synology kit. Can't say anything good about Thecus. I've got an early 2006 Readynas (Infrant before Netgear acquired them) that hasn't missed a beat despite numerous power cuts. Switches itself on faithfully every morning (except the weekends), switches itself off at night, uses bugger all electricity, supports Active Directory, FTP's its backups and sends an email alert if the backup location is missing. I could WASTE my time knocking up a homebrew NAS but these things are now cheap as chips and most businesses are after something with proven reliability, after-sales support and a long warranty. If noise is an issue then a stock Thermaltake silent(ish) case fan goes straight in.

        And why test a 4-bay NAS with RAID 0?

    2. This post has been deleted by its author

    3. Stoneshop
      FAIL

      Re: @ZFS Fanboys:

      the arcane spells that need to be typed into the shell, and so on.

      With most distros, at some point in the install process (which is just as graphical as Windows') you get to the point where it wants to know what disks it can use. You click the appropriate disks, you click that you want those in a RAID set, you click that you want the lot formatted, and there you are.

      On top of which, I'd also have to order all the parts and wait for them to be delivered.

      You can do other things between ordering and the parts being delivered, you know. Which is roughly the same length of time it takes for a complete system, or a NAS box, to be delivered. And less time than driving to a shop, finding that they don't have the kit you need, driving to another, finding they do, but only with smaller disks which you (or the shop's techie) need to replace.

      1. JEDIDIAH
        Linux

        Re: @ZFS Fanboys:

        > the arcane spells that need to be typed into the shell, and so on.

        Like what RAID level you want? Yeah, I could see how that could be a bit of a burden. Then again, that crowd probably isn't even aware of NAS appliances at all.

    4. Robert Grant
      IT Angle

      Re: @ZFS Fanboys:

      "...I'm worth £300 day as a technical author...the arcane spells that need to be typed into the shell..."

      TMBSDOTW "technical" OWIWNPA

  18. This post has been deleted by its author

    1. This post has been deleted by its author

  19. Martin an gof Silver badge
    Meh

    Superficial testing

    I too thought it was odd that the boxes were tested in RAID0. Especially for a 4-bay box, RAID0 is just asking for trouble and - to my eyes - the fact that most of them achieved similar data transfer rates implies that there's a networking bottleneck, but I have also found that speed can vary enormously with the file system in use. It would have been nice to see comparative performance for at least RAID5, which is probably the best compromise at home for a unit like this.

    We have two QNAP devices at work, one ARM-based, the other Atom-based. Both have four discs in RAID6 and the difference in processor power really shows, especially on writes. The ARM-based processor quickly hits 95% or more according to the GUI's meter and manages perhaps a third of the throughput that the Intel box does. RAID6 is particularly heavy on the processor because of the need to calculate a second, somewhat complex, checksum.
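The rebuild load mentioned above comes from recomputing every stripe of the replacement disk from the survivors. RAID5's single parity is just a byte-wise XOR; RAID6 adds a second syndrome computed in a Galois field, which is the extra arithmetic the ARM box struggles with. A sketch of the XOR half only:

```python
# Why a RAID rebuild hits the CPU: each stripe of the replacement disk
# is recomputed from the surviving disks. This shows RAID5's single
# XOR parity; RAID6's second parity needs GF(256) math on top.

def xor_parity(blocks: list[bytes]) -> bytes:
    """Byte-wise XOR across the blocks of one stripe (RAID5's P parity)."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            parity[i] ^= b
    return bytes(parity)

stripe = [b"AAAA", b"BBBB", b"CCCC"]   # three data disks, one stripe
p = xor_parity(stripe)

# Disk 1 dies: XOR the survivors with P and the lost block falls out,
# because x ^ x = 0.
rebuilt = xor_parity([stripe[0], stripe[2], p])
print(rebuilt == stripe[1])  # True
```

Doing this (plus the second, field-arithmetic syndrome) for every stripe of a multi-terabyte disk is why the ARM box's CPU meter pins at 95%.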

    I built a FreeNAS box at home based on an AMD 450 (seriously considered the HP microserver but was put off by needing to throw out the RAM and wanting to use 2.5" discs rather than 3.5") and for work I built a third NAS based on an AMD A4 chip. That has space for 16 2.5" drives (in some very nice caddies which let you slot 4x2.5" discs into a 5.25" bay) and 16GB of RAM. Parts cost (without discs or case) was about £650 IIRC, but this includes three additional SATA cards and the drive bays. On those terms the hardware was a couple of hundred quid cheaper than the larger QNAP. In terms of read and write speed to its current 8 drives in Z2 (equivalent of RAID6) it wipes the floor with the QNAP devices, but I am having some problems with (I think) the onboard LAN. Should have spent an extra few quid on a nice NIC, but that's something I can do later.

    Should be working. Better go :-)

    M.

  20. Anonymous Coward
    Anonymous Coward

    What is never tested : warranty and service.

    I've had a Buffalo NAS sitting on a shelf somewhere for over two years. It was bought in a pinch by a colleague in Japan and brought over on a project.

    The power supply failed at some point. I have emailed and phoned with Buffalo on three continents, and with their main suppliers and dealers all around, AND I've even posted on their forum, asking them to send me a replacement power supply. I have offered to pay for it and the shipping.

    I have been given the runaround from Japan to the US to Europe, and ultimately I've been sent packing. They cannot provide me with a replacement PSU.

    So before you part with your hard-earned, take into consideration that there are some things that are never tested in a product review.

  21. Anonymous Coward
    Anonymous Coward

    I want a NAS that supports Windows HomeGroups*

    Homegroups are how I share resources at home, and I find that it works very well. In fact, it's a great feature if you've got Windows 7 or above.

    But where is a NAS appliance that runs Windows 7 (or 8)?

    If a NAS ran Windows 8, I'd get ReFS and storage Spaces too. All very nice.

    I do get it that many here at the Reg will think this is too 'consumer grade' for IT professionals. But these NAS boxes are for *home* use. Why should I have to build a low-power PC with a few USB-attached drives to make my own NAS? I want an appliance that does this for me, but without losing basic functionality that I get with a PC.

    * No, not workgroups, homegroups.

  22. Chz
    Thumb Up

    Good overview

    A decent Reg review, at least. But I have to say the speed on the TS-419 looks a little low. I have its TS-219 twin and I get much faster speeds than that.

    And yes the Microserver is a great option, but some people just want a small, quiet, power-sipping box that they turn on, stuff in a corner and leave. Get over it.

  23. CAPS LOCK
    Happy

    Anyone thinking of going the DIY route needs to be aware of the FreeNAS seven series fork Nas4Free. I'm a satisfied user: http://www.nas4free.org/

    At least check it out.

  24. Mark Leaver
    Thumb Up

    Qnap are quite good

    I have a QNAP TS-410 4-drive NAS. I had one glitch with the drives in it when I got it a couple of years back, in that I was using drives that weren't recommended. However, I did a scan on the drives and everything came good again and it is now going great guns. I also have a D-Link DNS-323 2-drive NAS and I quite like the D-Links as well. However, the Synology has an added feature where you can add a 2- or 5-drive expansion box, just plug it into the back of their NASes, and it allows you to add to your existing structure without having to set up a whole new NAS. If I was starting all over again, I would either go for an HP Microserver and FreeNAS or I would go for the Synology option.

  25. Zadkiel

    useless

    Considering this is a site with a target audience of IT professionals, this review is all but pointless.

    a) These should all be rack-mount servers, not silly square boxes whose place is sitting on desks at home.

    b) No mention of support for business NAS requirements, such as iSCSI, interface teaming, encryption, auth via AD, LDAP, etc.

    1. diodesign (Written by Reg staff) Silver badge

      Re: useless

      "these should all be rack mount servers"

      The Reg caters for a big range of hardware - from serious consumer to IT pro. If you want rack-mounted enterprise-grade kit, take a look in the servers and storage sections:

      http://www.theregister.co.uk/data_centre/

      C.

    2. Steve I
      Coat

      Re: useless

      "considering this is a site with a target audience of IT professionals"

      This is The Reg - I don't know what the 'target' audience is, but the actual audience is a bunch of schoolkids playing technological Top Trumps and arguing about whose adopted mega-corporation is the best.

  26. Anonymous Coward
    Anonymous Coward

    How expensive?

    Alternative cost-effective solution: Raspberry Pi, £30.

    1. Danny 14
      Facepalm

      Re: How expensive?

      4-bay USB NAS on a Pi? That'll be quick with RAID 6, I bet.

      1. Matt Bryant Silver badge
        Boffin

        Re: Re: How expensive?

        "....That'll be quick with RAID 6 I bet." Probably not, especially with a rebuild, but it could make for a cheap disk archive rather than high-speed NAS. If you could cluster two Pi systems you could even build a redundant archive in a small(ish) box with very modest power and cooling requirements.

  27. msage
    Happy

    ReadyNAS

    Another advantage of the ReadyNAS (certainly the Pro and Ultra versions) that doesn't appear to be mentioned is that they are on the VMware HCL; therefore, if you have a home ESX lab you can use one as an iSCSI or NFS datastore. That, I believe, is not something you would get from building your own NAS (although I have used FreeNAS as an ISO datastore on ESX).

    Michael

  28. Anonymous Coward
    Anonymous Coward

    Authentication? Security?

    Did we not think it would be relevant to actually discuss the functionality of the reviewed units. For example, what kinds of authentication and access control are supported?

    Most NAS appliances / features aimed at home use only support giving unfettered access to everybody on the network. A compromise I'm sure most of us would not find acceptable.

    Yes I want my files to be accessible on various devices, no I do not want the risk of it all being deleted with a single erroneous click or keypress.

    1. Anonymous Coward
      Anonymous Coward

      Re: Authentication? Security?

      Synology boxes do username/password control on shared folder access as well as semi-automatic creation of personal (private) folders and can apply quotas.

      You can also define which users can access the management interface.

      I'd be surprised if the other boxes didn't do similar.

      1. Anonymous Coward
        Anonymous Coward

        Re: Thanks

        Thanks, that's useful - but my point was really that a "review" should have at least mentioned these things.

  29. Sky

    Synology user report

    I have one of the fewer-bay Synology NAS devices (in the performance-enhanced '+' version), and the fan is fairly quiet (it runs non-stop in a main room of the house, and is not concealed behind a TV or anything). It is so quiet that I hear the hard drive when there is access.

    I've been running it non-stop for probably two years now, so for me reliability is good. I don't have to perform any housekeeping or maintenance on it.

  30. Steve I

    HP Microserver...

    Just looking to get a NAS of some sort - would the HP be a good buy? Currently available for about £100 with cashback.

    Looking to host & share photos, videos and music (in an iTunes library) via iTunes home sharing & DLNA to an DLNA capable AV amp and TV and to backup multiple Macs via Time Machine.

    Recommendations on FreeNAS or some flavour of Unix/Linux?

    Does FreeNAS play nicely with DLNA? Anyone configured the HP for Time Machine backups using AFP?

  31. CJ Hinke

    In a word, LaCie

    While I realise this article only reviewed four-bay NAS boxes, it missed the clear winner, by any standard. LaCie's 5big Network 2 NAS box comes with five 3TB disks. Flawless operation, plenty of ports, and only $1200 for 15TB.

    Mine's full already! http://www.lacie.com/products/product.htm?id=10485

  32. Avatar of They
    Meh

    Missed a treat

    You can get the HP Microserver (with £100 cashback), the Windows Home Server OS and 4 x 2TB for £440 or thereabouts. Which is cheaper and better than most of the offerings in this list.

    You have ones over a grand here for a few extra bells and whistles, which are minority stuff for a lot of users who just want managed backups and data security.

This topic is closed for new posts.

Other stories you might like