Scale out sister: Open sorcerer pulls v3 Gluster cluster out of Red Hat

Open sorcery evangelist Red Hat has updated its Gluster-based clustering product, Red Hat Storage Server, to v3, adding capacity and cluster nodes. RHSS v3.0 is based on the GlusterFS 3.6 file system running on RHEL 6. It is designed for scale-out file storage and is built, of course, from open source software. A datasheet (PDF) says: “Red Hat …

  1. Anonymous Coward

    I suspect Windows Server 2012 R2 is still a much faster NFS 4.1 Server. And is somewhat easier to cluster and manage. And includes tiering / dedupe.

    1. Nate Amsden

      don't like windows NFS

      Windows 2012 NFS sucks, badly. I thought it might be an OK solution for my sub-TB of NFS data (migrated from Nexenta ZFS), but it's been almost nothing but issues. I haven't tried R2, but HP says there is really nothing NFS-related in R2 that makes it worth upgrading or that will fix any of the outstanding problems, which MS is likely never to fix, including:

      - I often get an I/O error when accessing a snapshot for the first time (over NFS); the second time it works fine (still working a support case on this - sounds like another issue that won't get fixed).

      - When dedupe is turned on, the command 'du' does not return accurate information the first time: it typically reports less than 20kB for files that are multiple GB in size. Forcing the NAS to re-read the file produces correct size calculations (for about 60 seconds, then the incorrect sizes come back). Known issue with MS and dedupe, no fix, so I disabled dedupe on most of the volumes.

      - Dedupe operates at a block size of 32kB. Pretty coarse. The first volume I migrated had literally 20+ copies of 5,000+ images, many coming in at 20kB and less, so dedupe didn't do well (I didn't realize the 32kB thing until after this migration). Dedupe is also not inline. So I figure once our deduping 3PAR 7450 is in place I will shut off dedupe on Windows and just use the 16kB inline dedupe on the 3PAR instead.

      - The cluster is FAR too sensitive to DNS configuration.

      - If I add any new volumes to the cluster, they will by default take over the existing E: volume, knocking its drive letter away and messing up the NFS (and I assume CIFS) shares. Another known MS issue not likely to get fixed, but at least I know to be very careful and can prevent the system from making this mistake in future.

      - I could write something myself, I guess, but snapshot scheduling in Windows sucks: it schedules fine, but the retention period is effectively "whenever I happen to reach the max number of snapshots, which I think is 64". I want to define retention periods like hourly snapshots retained for 24 hours, daily snapshots retained for a week, etc. - see the sketch after this list.

      - NTFS is not case sensitive by default. This one took support a couple of weeks to track down: I kept getting errors with rsync because some directories contained the same file names in multiple case formats; once we disabled case insensitivity via the registry, it worked better.
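
      The retention logic I actually want is only a few lines if you script it yourself. A rough sketch - the snapshot list, its format, and the list/delete helpers are all hypothetical; plug in whatever enumerates your snapshots:

```python
from datetime import datetime, timedelta

# Hypothetical policy: hourly snapshots kept for 24 hours,
# daily snapshots kept for 7 days.
KEEP_FOR = {"hourly": timedelta(hours=24), "daily": timedelta(days=7)}

def expired(snapshots, now=None):
    """snapshots: iterable of (name, taken_at, kind) tuples, where
    taken_at is a datetime and kind is 'hourly' or 'daily'.
    Returns the snapshot names whose retention window has passed."""
    now = now or datetime.now()
    return [name for name, taken_at, kind in snapshots
            if now - taken_at > KEEP_FOR[kind]]

# Feed this whatever lists your snapshots, then delete what it returns:
# for name in expired(list_snapshots()): delete_snapshot(name)
```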

      Probably logged a good 18 hours of support calls on this Windows 2012 storage cluster for NFS to date, and I'm not yet fully deployed.

      One good thing about Windows 2012, though: at least I can write zeros to the volume and reclaim space on the backend (sketched below). Couldn't do that with Nexenta - the zeros would get compressed and never sent to the backend.
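
      For the curious, the reclaim trick is just filling free space with zeros and deleting the file so the array's zero-detection can release the blocks. A rough sketch (the mount point is made up):

```python
import os

# Fill the volume's free space with zeros, then delete the file so
# the backend array can reclaim the zeroed blocks.
path = "/mnt/nfsvol/zerofill.tmp"
chunk = b"\0" * (4 * 1024 * 1024)  # write 4 MiB of zeros at a time
try:
    with open(path, "wb") as f:
        while True:
            f.write(chunk)
except OSError:
    pass  # ENOSPC: free space is now zeroed
finally:
    if os.path.exists(path):
        os.remove(path)
```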

      Maybe it works great for CIFS, but for NFS, stay far away from Windows 2012 Storage Server. My requirements are REALLY basic: file serving to servers, less than 500GB of data (currently), snapshots, and high availability. Oh, and something that is supported (the 3PAR support matrix is very strict). I didn't want to pay very high $$ for sub-1TB of data either. I love Linux and have been using it for 18 years, but the state of file systems on Linux means there isn't a solution out there that is good enough for me (e.g. no snapshots - no, not going to use btrfs or zfs in production, though I do use ZFS on Linux at home). That, and a lot of Linux HA involves something like DRBD, which is just stupidly wasteful when operating with a highly available SAN on the backend.

      Inline compression would be nice too. NFS v3 - I don't care about any NFS version other than 3. I had heard good things about Windows Server NFS in recent years, and HP claims they have a lot of customers using it with success (I have been in direct contact with the product manager for many months now).

      Maybe it's just me - I do find a way to break things in unusual ways.

      HP announced integrated file serving support for 3PAR last year; maybe when that comes out it will be good. I plan to deploy it at another, smaller site whenever it becomes available.

      Nexenta was bad; this might be even worse, though at least I get supported high availability with it. With Nexenta, we originally got it to run HA within VMware (Nexenta VMs). They said they supported it; we tested it and it worked - but when it came time to really test it, it failed badly, with lots of data corruption and reboot loops (until we disabled HA - not many problems since, but I don't trust it anymore).

      Red Hat Storage Server doesn't meet my needs either, so that's out.

      1. thames

        Re: don't like windows NFS

        Responding to astro-turfing with actual facts and experience? That's no fair! How is our poor astro-turfer supposed to meet his contract terms and put bread on his table if you know what you're talking about?

      2. Vic

        Re: don't like windows NFS

        "e.g. no snapshots - no, not going to use btrfs or zfs in production"

        I use LVM for snapshots - that way, it's fs-agnostic. Works wonderfully...
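
        Roughly this cycle, scripted (vg0/data is a made-up volume group/LV; lvcreate/lvremove are the standard LVM CLI):

```python
import subprocess

# Take a copy-on-write snapshot of an LVM logical volume, read or
# back up from it, then drop it. vg0/data is hypothetical.
subprocess.run(
    ["lvcreate", "--snapshot", "--size", "1G",
     "--name", "data-snap", "/dev/vg0/data"],
    check=True,
)
# ... mount /dev/vg0/data-snap read-only and back it up here ...
subprocess.run(["lvremove", "-f", "/dev/vg0/data-snap"], check=True)
```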

        Vic.

    2. Tom Samplonius

      "I suspect Windows Server 2012 R2 is still a much faster NFS 4.1 Server. And is somewhat easier to cluster and manage. And includes tiering / dedupe"

      Windows 2012 NFS is not in the same category. Gluster supports NFS only for compatibility; it isn't intended to be an NFS server. It is intended to be used as large-scale storage via its own API. How many supercomputers store their files on Windows 2012 NFS? None. How many supercomputers use Gluster? Almost all.

      But now the world likes virtualization, so storing VM images in Gluster seems like a good thing to do. KVM and Xen will get native Gluster support in their respective hypervisors shortly. And big cloud operators like the idea of using Gluster over some underperforming hardware from NetApp or EMC.
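
      That native path is libgfapi, which lets applications talk to a volume without an NFS or FUSE hop. A minimal sketch using the libgfapi-python bindings - host and volume names are made up, and treat the exact method signatures as an assumption rather than gospel:

```python
# Talk to a Gluster volume over the native API (libgfapi) instead of
# NFS/FUSE, via the libgfapi-python bindings ("gluster" package).
from gluster import gfapi

vol = gfapi.Volume("gluster-node1", "vmstore")  # server, volume name
vol.mount()                                     # attach via libgfapi
with vol.fopen("/disk0.img", "w") as f:         # path inside the volume
    f.write(b"placeholder image data")
print(vol.listdir("/"))
vol.umount()
```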

  2. Matt Bryant

    Good news!

    Title says it.

  3. elip

    this ain't NFS chief.

    Don't know what "value add" enterprisey crud Red Hat throws on top of GlusterFS, but for what it's worth, about 6-8 years back I ran a small test-bed storage cluster: about six Dell x86 boxes running Solaris 10, each with a bunch of internal disks and multiple aggregated 1Gb links. Setting up the cluster was dead simple: as I recall, two commands on each box for the clustered volume creation (roughly the sequence sketched below). The file system on the disks was ZFS. I threw a bunch of loads at that clustered file system and it didn't break a sweat, even though it all runs in userland over FUSE (at least it did back then). I was very impressed with the results: sequential performance was *always* equal to the line speed of the aggregated links, and adding/subtracting boxes to/from the cluster worked flawlessly. I wouldn't hesitate to use Gluster in production for backup-to-disk or large software repos (that's what we used it for).
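
    The sequence was essentially peer-probe, then volume-create and start. A rough sketch, wrapped in Python just to be explicit (node names and brick paths are made up):

```python
import subprocess

def gluster(*args):
    # Run a gluster CLI command, failing loudly if it errors.
    subprocess.run(["gluster", *args], check=True)

# On node1: pull node2 into the trusted pool, then create and start
# a two-brick distributed volume. Names/paths are hypothetical.
gluster("peer", "probe", "node2")
gluster("volume", "create", "testvol",
        "node1:/data/brick1", "node2:/data/brick1")
gluster("volume", "start", "testvol")
```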

    1. Tom Samplonius

      Re: this ain't NFS chief.

      "Don't know what "value add" enterprisey crud Red Hat throws on top of glusterfs..."

      Red Hat is Gluster (it bought the company), so there's nothing thrown on top - it's Red Hat all the way through.

  4. Mark #255

    19 petabytes?

    Think of all the pr0n and t0rr3nts, er, legitimately-acquired movies, games and music one could store in that.

    Thing is, would it fit in my under-stairs cupboard?

  5. Justin Clift

    Looking for testers for GlusterFS 3.6.0 native OSX client bits...

    For anyone with some spare time over the next few days, we (the upstream GlusterFS community) are looking for testers for the recent 3.6.0 beta3 release.

    This is the first release with native Mac OS X FUSE client support. OS X users can now access GlusterFS volumes directly, without needing to use NFS, Samba, etc.

    MacOS X Homebrew formula for it:

    https://github.com/justinclift/homebrew/blob/glusterfs360/Library/Formula/glusterfs.rb

    To test it, set up GlusterFS 3.6.0 beta3 on Linux or BSD, create some volumes, then use the OS X FUSE client to work with files on them. Let us know via the mailing lists if any weirdness happens for you. (In theory, it shouldn't.) :)
