The first iteration of the Gluster clustered file system that is going through the Red Hat annealing process is coming closer to market with the launch of the first beta of the tool since Shadowman acquired Gluster last October. There was nothing wrong with having a product called GlusterFS – that name suggested it was part …
There, I've said it, but it had to be said.
Re: Gluster fsck
Yeah, I had it in the subhead. But then I changed my mind to let you guys finish the joke.
FYI, the "FUSE" client isn't a native client per se. FUSE is a mechanism for letting a user-space (i.e. non-kernel) program appear to be a filesystem, and as such it incurs more overhead than something like NFS, which is a kernel module.
Gluster's weak point, which is not mentioned, is that a file cannot span server nodes. If you have 50G left on one and 50G left on another and need to write/copy/save 51G of data, you're SOL.
Last but not least, "Shadowman"? What dark and odorous orifice did you pull that one from?
Shadowman is the trademarked name of the Red Hat logo.
This looks like a name clash, as there's no way they'd be using Filesystems in Userspace if performance was what they're after.
What you say is not true, googoobaby. There is a stripe translator, not enabled by default but only a CLI command away, that will stripe across multiple bricks (which can be on multiple servers).
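For what it's worth, striping really is just a CLI command away. A rough sketch of what that looks like (the hostnames and brick paths here are made up for illustration):

```shell
# Create a 2-way striped volume: each file is split into fixed-size
# chunks spread across both bricks, so a single file can grow beyond
# the free space of any one brick/server.
gluster volume create stripevol stripe 2 \
    server1:/export/brick1 server2:/export/brick2

# Bring the volume online so clients can mount it.
gluster volume start stripevol
```

Note that striping trades away some resilience: lose one brick and you lose every striped file, which is why it isn't the default.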
Your choice with Gluster is either to mount from a server node with an NFS client or to go via the FUSE client. NFS is more efficient in the sense that the client is better plumbed into the system, yet even with the FUSE overhead, the FUSE client gives a cleaner mapping from Gluster to the local filesystem. As the article itself states, the FUSE client has "some performance benefits". I used it mostly because it was less of a pain than the NFS functionality.
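For anyone wondering what the two mount options above actually look like, here's a hedged sketch (volume and host names are invented; any server in the trusted pool can serve the mount):

```shell
# Native (FUSE) mount -- the client fetches the volume layout and
# then talks to all the bricks directly:
mount -t glusterfs server1:/myvol /mnt/gluster

# NFS mount of the same volume via Gluster's built-in NFS server --
# all traffic funnels through the one server you mounted from:
mount -t nfs -o vers=3,tcp server1:/myvol /mnt/gluster-nfs
```

The design difference is the real point: the FUSE client load-balances across servers by itself, while the NFS client pins you to one server and needs something external (e.g. a floating IP) for failover.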
What of GFS2?
It really is a cluster fsck.
Needs a lot of work
I created a Gluster cluster last year, mirrored across two VMs to give high-availability NFS. It "works", but it's dog slow, and memory usage goes through the roof after several days.
I've tried an IBM alternative (used on a ScaleComputing storage cluster) and it's natively quick and supports NFS, CIFS, etc. If Red Hat wants to match this, they have a lot of work to do! Unfortunately the IBM offering was just too expensive.