In a software-defined data centre, why are some of the hottest properties hardware platforms? Plenty of recently formed start-ups will come to mind: highly converged, sometimes described as hyper-converged, servers. I think it demonstrates what a mess our data centres have got into that products such as these …
What is your rant aimed at?
Not sure what you are complaining about exactly?
Re: What is your rant aimed at?
I'm not entirely sure either, there are some good points but it's almost as though someone's jotted down some bullet points and forgotten to write the article about them...
In any case, the additional cost of such 'converged systems' is normally a value-add proposition rather than the cost of the hardware.
A Nutanix system costs more than the base Supermicro hardware, it costs more than the equivalent HP SL servers, and it costs more than a Dell PowerEdge C6100, and so on. The reason is not the hardware; it's the value-add that Nutanix offers. Customers who like that value-add are therefore likely to 'buy in' to the system and consequently pay more for that feature.
I mean, come on, how dare vendors try to add in unique selling points so that they can charge more for their product? Just who do they think they are?
There's nothing to stop you running a software-defined datacentre based on commodity hardware; it just leaves you with more gaps to fill. Whether that headache is worth the saving depends on you, your budget, your skills and your management's expectations. People do like the assurance that purchasing an appliance gives. Aside from a pretty good chance that it will do what it says on the tin, it gives you somewhere else to point if it doesn't work.
Re: What is your rant aimed at?
Sounds remarkably like a rant I've been levelling at Nutanix for quite some time. Their product is hardware boxes, despite the fact it's all off-the-shelf gear and it's the software doing all the work. But can you buy just a software licence? Nope. The issue is that the hardware is overpriced: you can't see the real cost of the software, and frankly you'd rather just buy the software given the chance.
Edit: To be fair it looks like you can actually buy Nutanix as software now which renders their specific involvement in this story moot but they as-was were doing exactly the kind of thing I imagine the writer is talking about.
Nonsense, he's complaining about nonsense.
Companies like nonsense, and better yet, they love paying for it.
The last nonsense I found was upgrading a database: an inordinate amount of money spent solving the wrong problem by the wrong means, because people were too afraid they might lose their jobs if they paid attention to the IT department and changed their practices slightly.
"We're the finance department, the lifeblood of this company, you do as we say".
"Surely the value has to be in the software: so have we got so bad at building our data centres that it makes sense to pay a premium for a hardware platform?"
Wish granted. Meet Maxta. Hyper-converged infrastructure as a software-only layer.
Nutanix and SimpliVity simply remove the burden of haggling with vendors, verifying against an HCL and waiting for the HCL to catch up with new hardware. For that, they charge a price.
Is that worth money? IIRC, cher Storage bod, you are the fellow who talks about EMC, Netapp, HP and so forth like it's a good thing...and what are they doing except packaging up hardware and software and selling it as an appliance? I can buy Supermicro fully redundant arrays and add my own software. From Windows Server to Nexenta to what-have-you it's absolutely no different than turning to EMC/Netapp/HP/etc. Except that I'd be rolling and configuring my own instead of pushing an RFQ to my vendor pool.
So, do you build from Supermicro, Huawei, Quanta and so forth? Controlling every level of the stack intimately? Or do you use RFQs and your pet vendors? Why? What value do you see in EMC over Supermicro?
Now, how is that same value different from converged stacks like Nutanix or SimpliVity?
The future is here. The storage admin is no longer a requirement. If you want to employ one, then by all means make them work for their money and make sure they control capital costs by tightly integrating everything. However, you can just punt them out on their arse now and buy converged infrastructure, letting the virtualisation admin handle everything. Hey, with NSX they can even handle networking too.
Where's the value? In reducing headcount, or at least reassigning those same bodies to different tasks that could be producing ROI greater than the cost of having someone else do the integration for you. Same as it always was.
I agree mate, just don't understand why "storagebod" created this article.
BTW for "scalable hyper-converged", check out Seanodes
Re: @Trevor Potts
I've no idea. The cynic in me suspects that our dear storagebod finds Nutanix and SimpliVity to be simply outside his comfort zone. Which, in IT is exactly when it should be investigated. It's the stuff that lies outside our comfort zones that will define the future of computing.
Hey, I know great people at Maxta, Nutanix and SimpliVity. If Storagebod wants to meet and greet, I can absolutely arrange it. I think that a laying of hands needs to occur. Sitting down with this stuff and actually using it will, I believe, change the fellow's mind in a right hurry.
All of these companies offer right proper kit, damn solid and easy to use. It is not a toy. It's not a joke. It's not some attempt to grab margin from the stupid. Server SANs absolutely, without question are a critical element of the future of storage.
"Object storage versus Server SANs" will be the new "block storage versus NFS". With all the added fun that Server SANs are typically object stores in their own right! They will be the primary storage mechanism of the next 15 years, with disk arrays and filers replacing tape as an archival medium. (The age of MAID is upon us! Tremble in fear!)
That means that if you do storage for a living and you aren't learning about object stores, ranging from Hadoop to Caringo, as well as Server SANs ranging from Nutanix to Maxta, then you're cutting your own throat. Your understanding of the intimate details of LUNs, cache tuning and the vagaries of the NetApp and EMC operating systems is about to mean sweet fuck all.
We've entered an era in which the storage systems don't come with nerd knobs, because we've got software smart enough to look after itself. Those people needing full-bore nerd-knobby access to everything will be in the extreme minority, and they can afford to hire a room full of PhDs to keep their precious snowflake ticking along.
The future belongs to Object projects like Swift, Server SANs like Maxta and whiteboxers like Supermicro shifting Openserver-like tin...and you can quote me on all of that. That's my prediction, and I'm sticking to it.
Why not run all your hardware on software-defined virtualised machines? That way you can eliminate the actual hardware...
What exactly is your point Storagebod?
Funny how you write a blog post and then, all of a sudden, people make independent claims that are pretty much answered in the blog. Saves a lot of writing (if this link is allowed, that is): http://blog.millennia.it/2014/06/03/nutanix-defending-the-hardware-appliance-in-a-software-defined-world/
If the link I provided isn't allowed then just type Millennia Blog Nutanix into Google and it'll likely be the top hit ;)
No clue what this article is about, so...
I love bacon.
Re: No clue what this article is about, so...
So do I - have an upvote
Create a hyper-converged array with what you have in your datacenter
Another solution to consider in the "software-only" camp is Atlantis USX. There's a recent in-depth article by Chris Mellor on USX: http://www.theregister.co.uk/2014/05/29/atlantis_takes_on_vipr
USX installs on the hypervisor (VMware-only for now) and works with server RAM & flash as a performance pool, while aggregating any storage on the network (flash, SAN, filers) as a capacity pool. The server admin dials in the performance, protection level, and capacity via USX for the VM at the point of provisioning. Storage characteristics defined by software - like a Nimble hybrid array, a Pure AFA, or Nutanix hyper-converged. Allows you to run up to 5x more VMs on the existing storage in your datacenter.
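The two-pool idea described above can be sketched in a few lines of code. This is purely an illustrative model, not the real Atlantis USX API: every class, field and function name here is invented to show the shape of the provisioning decision (server-local RAM/flash as a performance pool, networked storage aggregated as a capacity pool, and per-VM performance/protection/capacity settings chosen at provisioning time).

```python
# Hypothetical sketch of the two-pool hyper-converged model -- NOT the real
# Atlantis USX API. All names and parameters are invented for illustration.
from dataclasses import dataclass

@dataclass
class PerformancePool:
    """Server-local RAM and flash used to accelerate I/O."""
    ram_gb: int
    flash_gb: int

@dataclass
class CapacityPool:
    """Any storage reachable on the network, aggregated for capacity."""
    backends: list      # e.g. ["SAN", "NFS filer", "local flash"]
    total_tb: float

@dataclass
class VmVolume:
    """What the server admin 'dials in' at provisioning time."""
    name: str
    capacity_gb: int
    protection_copies: int  # replication level for data protection
    iops_target: int        # performance drawn from the performance pool

def provision(perf: PerformancePool, cap: CapacityPool, vol: VmVolume) -> dict:
    """Pair a VM volume with both pools and return a provisioning summary."""
    return {
        "volume": vol.name,
        "capacity_gb": vol.capacity_gb,
        "copies": vol.protection_copies,
        "iops_target": vol.iops_target,
        "accelerated_by_gb": perf.ram_gb + perf.flash_gb,
        "backed_by": cap.backends,
    }

if __name__ == "__main__":
    perf = PerformancePool(ram_gb=64, flash_gb=400)
    cap = CapacityPool(backends=["SAN", "NFS filer"], total_tb=20.0)
    vol = VmVolume(name="db01", capacity_gb=500,
                   protection_copies=2, iops_target=5000)
    print(provision(perf, cap, vol))
```

The point of the sketch is the separation of concerns: performance and capacity come from different pools, and the admin only states policy (capacity, copies, IOPS) rather than carving LUNs by hand.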
I don't know what the point of this article is either, so maybe we can make the comment section more useful than the article.
My question - who cleans up guide dog mess?? I mean how would the blind person know?!?
we spend money on hardware
because that's what all the fancy market-promise software requires to actually perform.
All these MBAs "thinking outside the box" have totally forgotten that there IS a box that must be considered.