NetApp puts everything it's got into a hyperconverged box

JohnMartin

Re: too big, or too small

That the other HCI vendors are beginning to sell storage-only nodes (I haven't seen them sell compute-only nodes though) validates the architectural design NetApp has taken, and the consumption models around them and the rest of the menagerie of mix'n'match node types seem to be a lot more complex than what's being launched with NetApp HCI. It's also worth noting that most (all) of these approaches require you to purchase additional VMware licenses for the storage nodes, and they tend to push up the licensing costs of Oracle and SQL Server, which want to be paid for the total number of cores in the whole vSphere cluster just because you might run Oracle on them one day (it's dumb, but it happens).
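
To make that licensing point concrete, here's a rough back-of-the-envelope sketch in Python. The 0.5 core factor matches Oracle's published table for most x86 CPUs, but the list price and node sizes are made-up illustrations, not quotes:

    # Hypothetical illustration only: the 0.5 core factor matches Oracle's
    # published table for most x86 CPUs; the price and node sizes are
    # invented examples, not quotes.

    CORE_FACTOR = 0.5
    PRICE_PER_PROC_LICENSE = 47_500  # illustrative list price, USD

    def oracle_license_cost(total_cluster_cores: int) -> float:
        # Oracle counts every core in the vSphere cluster the database
        # *could* run on, not just the hosts it actually runs on.
        return total_cluster_cores * CORE_FACTOR * PRICE_PER_PROC_LICENSE

    # Four compute nodes with 20 cores each:
    print(oracle_license_cost(4 * 20))           # 1,900,000

    # Add two "storage-only" nodes (16 cores each) to the same vSphere
    # cluster and the licensable core count -- and the bill -- rises anyway:
    print(oracle_license_cost(4 * 20 + 2 * 16))  # 2,660,000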

QoS that actually works and is easy to use and change, with a floor, a max and a burst per volume, is different from QoS that just does rate limiting and causes unpredictable latency spikes. Plus, a lot of people are still unwilling or unable to move to the latest version of vSphere.
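
For what it's worth, here's a minimal Python sketch of what per-volume floor/max/burst QoS looks like, as opposed to a bare rate limiter that just clamps at the cap. The credit-banking mechanics are my own simplification for illustration; the real Element OS scheduler is considerably more involved:

    class VolumeQoS:
        # Per-volume settings in the style of Element OS: a guaranteed
        # floor (min), a sustained ceiling (max) and a short-term burst.
        def __init__(self, min_iops, max_iops, burst_iops):
            self.min_iops = min_iops
            self.max_iops = max_iops
            self.burst_iops = burst_iops
            self.credits = 0  # banked by running below max (assumed mechanic)

        def allowed_iops(self, demand, cluster_under_pressure):
            if cluster_under_pressure:
                # Under contention every volume still gets its floor,
                # which is exactly what a bare rate limiter can't promise.
                return max(self.min_iops, min(demand, self.max_iops))
            if demand <= self.max_iops:
                # Running below the cap banks credits for a later burst.
                self.credits = min(self.credits + (self.max_iops - demand),
                                   self.burst_iops)
                return demand
            # Spend banked credits to exceed max briefly, never above burst.
            spend = min(demand - self.max_iops,
                        self.credits,
                        self.burst_iops - self.max_iops)
            self.credits -= spend
            return self.max_iops + spend

    vol = VolumeQoS(min_iops=500, max_iops=1000, burst_iops=2000)
    print(vol.allowed_iops(400, cluster_under_pressure=False))   # 400, banks credit
    print(vol.allowed_iops(1800, cluster_under_pressure=False))  # 1600, bursting

A plain rate limiter would have clamped that second request at 1000 regardless of how idle the volume had been, which is where the unpredictable latency spikes come from.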

Lastly, there are a bunch of other strengths Element OS brings to the table in terms of performance, scalability, replication, DR, failure resiliency, multi-tenancy, and the ability to both grow and shrink the capacity and performance of the storage pool non-disruptively.

Even so, there are going to be times when buying servers with exactly the right ratio of compute to memory will make more sense than buying one of the three HCI compute nodes, but that's why there are also more traditional converged infrastructure offerings within the Data Fabric … both approaches have their strengths; you just have to understand the tradeoffs in each architecture.
