Red Hat has a server operating system, middleware, virtualization, and a cloud fabric – and now that it is shipping its Storage Server 2.0 software, it also has production-grade, scale-out clustered network-attached storage. The software is a gussied-up version of the GlusterFS file system, which was spun out of a project at …
Not quite as rosy
GlusterFS is a great product based on a great idea. Just a shame that it has been designed without considering any concept of fencing and split-brain prevention/management. That makes it fundamentally unusable in a safe fashion for a lot of tasks.
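For context on the split-brain point: GlusterFS does expose quorum settings that mitigate (though arguably don't fully solve) the problem the commenter describes. A sketch of the relevant knobs, assuming a replicated volume named `myvol` (a hypothetical name; the option names are the ones the `gluster` CLI documents):

```shell
# Client-side quorum: writes to a replica set fail unless a
# majority of its bricks are reachable, rather than diverging.
gluster volume set myvol cluster.quorum-type auto

# Server-side quorum: bricks shut themselves down when the
# trusted storage pool loses quorum, instead of accepting writes.
gluster volume set myvol cluster.server-quorum-type server
```

Whether quorum enforcement meets the bar of real fencing is exactly the kind of thing this thread is arguing about; it trades availability for consistency rather than managing a split after the fact.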
Last time I checked, GlusterFS was FUSE-based. How slow do you want to go? Linus himself has had some choice words on the suitability of FUSE for serious file systems:
From the GlusterFS website:
Glusterfs is a distributed cluster filesystem and the latency introduced by FUSE context switching is negligible compared to the latency introduced by the network.
But if I only want a moderate level of disk and clustering, this may be just the thing, especially since I can use GlusterFS without having to use RHEL.
Having used it extensively, and having contributed GlusterFS patches to make it work as the rootfs for the Open Shared Root project:
I can tell you that the performance isn't as good as NFS when used for the same purpose, the network latency being the same in both cases.
A similar discrepancy is apparent when it is used for things like /home.
I'm sure somebody will claim (without fresh comparable benchmark figures) that the performance situation is substantially different than it was back in 2010, but for reference, you might want to look at the figures I produced back then:
Specifically, for an idea of how much difference being in userspace makes on the server side, you can compare the performance of knfsd vs unfsd (more than double). Then look at the nosedive in performance when you use GlusterFS on both sides of the equation.
GlusterFS is great for large streaming accesses, but if you have an IOPS sensitive load (and virtually all loads are IOPS sensitive), the performance is going to suck pretty badly.
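The streaming-vs-IOPS distinction is easy to demonstrate for yourself. A minimal sketch (my own illustration, not from the thread): read the same file once with large sequential reads and once with many small random reads, and time both. On a local disk the file will mostly be cache-warm, so to reproduce the GlusterFS complaint you would point `path` at a file on the network mount instead.

```python
import os
import random
import tempfile
import time

def make_test_file(size_mb=16):
    """Create a throwaway file of random bytes; returns (path, size)."""
    fd, path = tempfile.mkstemp()
    with os.fdopen(fd, "wb") as f:
        f.write(os.urandom(size_mb * 1024 * 1024))
    return path, size_mb * 1024 * 1024

def sequential_read(path, block=1024 * 1024):
    """One pass of large sequential reads -- the streaming-friendly pattern."""
    start = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(block):
            pass
    return time.perf_counter() - start

def random_reads(path, size, n=2000, block=4096):
    """Many small reads at random offsets -- an IOPS-bound pattern."""
    start = time.perf_counter()
    with open(path, "rb") as f:
        for _ in range(n):
            f.seek(random.randrange(0, size - block))
            f.read(block)
    return time.perf_counter() - start

path, size = make_test_file()
try:
    seq = sequential_read(path)
    rnd = random_reads(path, size)
    print(f"sequential: {seq:.3f}s ({size / seq / 2**20:.1f} MB/s)")
    print(f"random 4K:  {rnd:.3f}s ({2000 / rnd:.0f} reads/s)")
finally:
    os.remove(path)
```

Each random 4K read on a networked filesystem pays at least one round trip (plus FUSE context switches on both ends with GlusterFS), which is why IOPS-heavy workloads fall off a cliff while large streaming reads stay respectable.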
As a storage admin, Red Hat is one of my least favourite OSes to deal with. Plus, everyone I know who has dealt with XFS on Linux has tersely advised OpenSolaris instead. I just don't see using Red Hat as the front end for storage.
RH works very well indeed with things like ZFS.
Nice step, but
there are still some things to work on: