Is hyperconvergence about to take over the enterprise data centre?

The glamour news in data centres for the past couple of years has been all about all-flash arrays, with converged systems providing a growing backdrop. The flash arrays provided hot performance while converged systems (CI) led by Dell EMC's CPSD (Converged Platforms & Solutions Division – formerly VCE) and Cisco/NetApp's …

  1. TechicallyConfused

    Weren't data centers populated by all-in-one compute, storage and network systems waaaay back in the '70s and '80s? I'm sure they were. . . now what was the term used for those systems . . . hmmm!

    The way I see it, if you can afford to buy these HCI systems then you should also be able to afford the skills to properly manage and maintain them, because no one with any sense is going to leave that to the vendor. And if you can afford those skills, then you will probably find it cheaper and more efficient to build out your own environment specific to what you want. Unless your storage needs exactly match your compute needs, you are going to be over-provisioning one or the other with these single-SKU systems.

    MAINFRAMES. . . that was the term I was looking for...

    HCI is just a lazy way for IT to buy kit in more cases than not and in all cases a cheaper way for vendors to try and differentiate themselves - at least it was until they all came up with more or less the same offering.

    1. DougS Silver badge

      The wheel turns

      There's always a fight between centralization and standardization on one side, and increased freedom to get what you need (or think you need) for a given task on the other. That's how we went from mainframes down to PCs. But managing tons of PCs, departmental servers and so forth has proven to be a nightmare for most organizations, which only gets worse as people start bringing in their phones, so they want to centralize again.

      People can now get that "freedom" they need with their BYOD phones and tablets, so they care less about having their PC centrally managed, or basically a terminal tied to a VM running in another building.

  2. Fenton

    Time is money

    Every hour you spend building the infrastructure, or calling vendor A because the product from vendor B or C does not play ball, is an hour you are not adding value to the business.

    I love driving fast cars around a track. I have also in the past built myself a good track car (because I enjoyed it), but all the time I spent building the car was time I couldn't drive it.

    In terms of compute and storage scaling there are now rack scale products out there where you don't need to scale compute and storage together, you can buy compute dense nodes and storage only nodes.

    Yes, build-your-own can be cheaper from a capex point of view, but not from an opex point of view.

  3. thondwe

    Vendor Lock in

    Buying hardware and software together is the ultimate lock-in (Apple!). At present, (in theory) our hardware can support VMware, Hyper-V, OpenStack, ... If your HCI provider goes belly up, or gets too expensive, or changes its licensing model, there's little you can do bar spending a shed-load to replace everything.

    It's the opposite of Software Defined Networking/Storage/etc., the other current marketing darling...

    1. Lost_Signal

      Re: Vendor Lock in

      If you buy VxRail you get standalone perpetual vSAN licenses, so if Dell/EMC decides to go crazy you can take your licensing elsewhere.

  4. Numen

    Forward into the past!

    We're seeing the re-invention of the divide-and-conquer approach: X was too big and too slow to provision, so we're going with smaller systems that are much more agile and that "anyone" can manage. Of course there are more of them, so maintenance time and effort is multiplied (1,400 security patch applications, anyone?) and we need more people to do it. After a while, this gets to be a problem. Wait - look! We can consolidate all these little servers into a few big ones. Problem solved!

    There's a time-honored tradition of stampeding over to a "new" approach that solves your current issues, without any insight (or memory) that the new approach has its issues, too. Too hard to figure out how to solve your current issues, so just follow the PR/hype and go with something different.

    Fun to watch this on its second or third go-around.

  5. Boyan_StorPool

    When HCI is not a good fit

    As pointed out, HCI is a very good fit for small and mid-sized deployments. There is hardly anything better for small deployments, which most of all need simple solutions that "just work". As size increases and you are looking at larger private and public clouds, it makes sense to still manage separate components - servers, network and storage. One reason is flexibility; another is manageability (not a black box, so you can solve bottlenecks in networking separately from compute/storage). A third reason is cost: at scale it is actually cheaper to go with standard components all the way. Lastly, this approach reduces vendor lock-in/dependency, since all components can be decoupled and different vendors can be used or swapped out.

    Cheers,

    Boyan @ StorPool

    www.storpool.com
