Welcome to the HYPER-converged bubble of HYPE! Enjoy it while it's here, storage folk

The hype around hyper-converged systems is huge. Are we experiencing a hyper-converged systems bubble? Companies in the field are growing like crazy, especially Nutanix and Simplivity. Nutanix scooped a billion-dollar-plus valuation earlier this year after a $140m funding round. Meanwhile, startups like Maxta, NIMBOXX, and …

  1. Anonymous Coward

    Time to retire?

    Couldn't identify half of the acronyms on this, and am glad I didn't.

    http://dilbert.com/strips/comic/1997-04-27/

  2. markkulacz

    Consider that most of the time in many hyperconverged solutions, the data being accessed by a VM (or compute thread) isn't on the same physical server as the VM. Most IO will ultimately go over the interconnect. There is no relationship between the physical server a VM runs on and the server that holds all (or even some) of the data within the VSAN datastore. Isilon will attempt to co-locate, but since the object is distributed across 2-20 nodes (maybe even more), the odds that any specific IO to the "datastore" actually lands on a disk in the local node are LOW. The SSD read cache is an L2 cache (on VSAN and Isilon... coincidentally...), and it sits in the disk group "under" the interconnect layer. Not that this is a bad thing - but technically speaking, once the cluster grows large enough, the compute is still not on the same physical node as the storage, even when the data is SSD read cached. The only way to resolve that is to fully mirror data objects (and not allow a mirrored copy to span more than one node), and then limit the compute threads that access that data to running on those nodes.
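
    (A back-of-envelope sketch of those odds, as a side note: the cluster sizes and stripe width below are illustrative assumptions, not figures from any particular product.)

        # Rough model: an object's data is spread uniformly across `stripe_width`
        # of the `cluster_size` nodes, with no affinity to the node the VM runs on.
        # Real systems add read caches and placement hints; this only shows why
        # the raw odds of a local hit shrink as the cluster grows.

        def p_local_io(cluster_size: int, stripe_width: int) -> float:
            width = min(stripe_width, cluster_size)
            p_holds_a_share = width / cluster_size   # local node stores part of the object
            local_fraction = 1.0 / width             # ...and that part is ~1/width of it
            return p_holds_a_share * local_fraction  # simplifies to 1/cluster_size

        for nodes in (4, 8, 16, 32):
            print(f"{nodes:>2} nodes, 8-wide striping: P(IO hits local disk) ~ "
                  f"{p_local_io(nodes, 8):.1%}")

    Under that uniform-placement assumption the answer collapses to 1/cluster_size regardless of stripe width, which is the point being made here: without deliberate mirroring plus pinning, locality evaporates as the cluster grows.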

    The interconnect is critical to get right. We are seeing 1Gb not enough, now 10Gb not enough... so do we go up to 40Gb? Isilon requires InfiniBand, as it offers reliable transport. VSAN hacked in an "RDT" driver (don't worry... you won't see it, because it is hidden within the hypervisor-converged storage stack) to simulate reliable transport over commodity Ethernet - my bet is that InfiniBand support (maybe even preference) is in the future.
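
    (For a rough sense of the bandwidth pressure, a sizing sketch with illustrative IOPS, block sizes and a 2x mirror for writes - assumptions for illustration, not measurements:)

        # Rough east-west sizing for one node when most of its IO goes remote.
        # Assumes a write-heavy workload mirrored to a second node (factor 2);
        # all of the numbers below are illustrative.

        def required_gbps(iops: int, block_kib: int,
                          remote_fraction: float = 0.9,
                          mirror_factor: int = 2) -> float:
            bytes_per_sec = iops * block_kib * 1024 * remote_fraction * mirror_factor
            return bytes_per_sec * 8 / 1e9  # gigabits per second on the wire

        for iops, blk in ((20_000, 8), (50_000, 32), (100_000, 64)):
            print(f"{iops:>7} IOPS @ {blk:>2} KiB -> ~{required_gbps(iops, blk):.1f} Gb/s")

    Even these crude numbers blow straight past 1Gb, crowd 10Gb at modest block sizes, and make 40Gb (or a lower-latency fabric like InfiniBand) look less like a luxury.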

    I have nothing against hyperconvergence. Massive scale-out needs highly scalable ways to disperse computation across many nodes, even allowing for elasticity that stretches into the cloud. But the push for hyperconvergence should be realistic about what it is and is not - and the architecture should be adopted when it makes sense. The "co-location of storage and compute" really is a myth.
