Most large companies are running at least two virtualisation platforms
That's not my experience, where companies typically standardise on one x86 virtualisation platform. Perhaps I've led a sheltered life.
In the opinion of this blogger, VMware needs to open-source ESXi and move on. By open-sourcing ESXi, it could start to concentrate on becoming the dominant player in the future delivery of the third platform. If the EMC subsidiary continues with the current development model with ESXi, its interactions with the OpenStack …
Where I'm at, we were just VMware, but Oracle prefers to be running on OracleVM, and there is about $300K/yr to save in licensing and support costs by switching, so over time VMware is probably going to slowly get pushed out. Starting with the Oracle DB/app environments (tired of fighting with their support about running under VMware), and then potentially spreading out from there. No telling whether KVM and Hyper-V won't both also get picked up along the way...
I ran into that too at a previous job: Oracle Linux on OracleVM results in cost savings that are hard to ignore. Couple that with Windows Datacenter licensing and ECI (including System Center) and the advancements in Hyper-V, and I'm starting to see a number of companies splitting their virtualization infrastructure into separate silos to save money.
I have some Xen installations that came about for the same reason (VDI deployments).
While Xen has its own quirks, it's a very stable system at a very good (relative) price. While it's true that it doesn't have the breadth of third-party support, what does exist seems to be well built and well priced.
In short, I have moved some workloads to Xen and have been happy about it.
I could also tell similar stories about Hyper-V.
In short, it doesn't surprise me to hear that most people have multiple VM environments.
I would also speculate that VMware has a pricing-justification battle looming on the near horizon, if it isn't here already.
While I don't know if ESXi truly needs to go open source - I can envision a near-future where they will be forced into becoming far more competitive.
When VMware started, they were the only game in town. With ESXi 6 they have moved further away from the vSphere Client and towards the web client. To avoid the vSphere Client, they took the crappy Flash app (not a real web app) that they call the web client and made it an exe. The web client now takes two to three times as many clicks, and much more mouse (and therefore hand) movement, to accomplish even simple tasks than the old vSphere Client does. Keyboard support is neither well designed nor well planned.
ESXi is OK with PowerCLI, but they lack any decent auto-completion. Their help is hopeless and their documentation has become unsearchable.
System Center has come A LONG WAY. Citrix Xen is nice. OpenStack is now usable.
I recently priced VMware for my data center, and VMware wanted $7500 per blade for ESXi, Virtual SAN and NSX. Its storage limitations per blade were terrible too.
What's worse is that each blade cost $5000.
I don't see VMware even attempting to compete with other vendors. Sure they do more... But the price is at least 3 times more than it's worth.
"I recently priced VMware for my data center, and VMware wanted $7500 per blade for ESXi, Virtual SAN and NSX. Its storage limitations per blade were terrible too."
Surely hypervisors in this configuration aren't suited to blades - the idea is a large over-specced server that runs many VMs and has room to grow and move VMs around. Hence your £40k server replaces 20 £5k servers.
Smaller, fit-for-purpose blades are designed to pack a standard server's computing power into a very small space with a shared chassis - a bit like a physical version of a VM, with no dynamic sharing?
There are definite benefits to using multiple blades as hypervisors over a single large physical server as a hypervisor. For example, if you are running multiple blades, a single blade failure is not catastrophic and can be dealt with very easily (i.e. vMotion). You can pretty easily scale up or down, you can shut down blades that are not being used at a given time, and you can probably pack more raw processing power or memory into the same space. I'd rather lose 1 out of 20 blades at a given moment than for a motherboard failure in a single server to knock out every virtual.
No one would be running a single host as a hypervisor; they would be running them as a cluster - otherwise you wouldn't use the paid versions of VMware, you'd stick with the free one or the Essentials version for a few hundred dollars.
The point about the pricing is that large hosts with lots of VMs and a good SAN are where the higher end VMware starts working out a good deal.
1 x large host versus 20+ smaller hosts: large hosts are where the TCO/ROI can be realised. As blades are themselves smaller hosts, the pricing doesn't work out - on blades you are paying just for the advantages of portable VMs and server separation. That said, assuming all the blades are identical and all the data is stored on a SAN anyway, the redundancy afforded by vMotion does provide real benefits - you can just link a spare blade to the same LUN the dead blade was on and off you go.
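The trade-off being argued above can be sketched with the thread's own rough numbers. This is purely illustrative: the hardware figures come from the "£40k server versus 20 £5k servers" comment, and the per-host licence figures (one-off multi-socket licensing for the big host, the roughly $7500/£5k-per-blade level quoted earlier) are assumptions for comparison, not real quotes.

```python
# Illustrative TCO sketch using the rough figures quoted in this thread.
# All numbers are assumptions for comparison only, not vendor pricing.

def tco(hosts, hardware_per_host, licence_per_host):
    """Total cost: hardware plus per-host hypervisor licensing."""
    return hosts * (hardware_per_host + licence_per_host)

# Scenario A: one large over-specced server (the thread's ~40k figure),
# licensed once (assume 4 sockets at ~2.5k per socket, so ~10k).
big_host = tco(hosts=1, hardware_per_host=40_000, licence_per_host=10_000)

# Scenario B: 20 small blades (~5k each) at the ~5k/blade licence
# level mentioned above - the licence is paid 20 times over.
blades = tco(hosts=20, hardware_per_host=5_000, licence_per_host=5_000)

print(f"1 large host: {big_host:,}")   # 50,000
print(f"20 blades:    {blades:,}")     # 200,000
```

Same aggregate capacity, but per-host licensing multiplies across the blade count - which is the commenter's point about where the higher-end VMware pricing does and doesn't make sense.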
I had a conversation with my former boss, who has been using OpenStack for almost a year now (he is building a platform on it for his new company's technology stack for customers). To quote him: "OpenStack sucks, but I know it" - something along those lines, anyway. OpenStack is not ready for prime time unless you have significant internal resources to keep it going, or a support partner like HP to help. A quote from HP: "The easiest thing about OpenStack is installing it; most organizations spend the bulk of their time simply keeping it running". You're in an endless upgrade cycle because the support cycles are so short, both with the community edition and with the enterprise editions from the likes of HP and Red Hat.
The storage on OpenStack is even worse from what I'm told; a lot of organizations are finding that out the hard way and deploying enterprise storage behind it. The networking in OpenStack sucks too - again quoting HP: "after about 50 nodes Neutron falls apart".
You can certainly make it work - I'm told eBay has something like 200 engineers working on OpenStack - but most organizations don't have the time/money/resources to devote anything remotely resembling that to running stuff.
OpenStack has a bright future, but it will be years (still!) until it's ready to be a product that you can install, support and keep stable for 3-5 years without constant hacking/fixing/patching/upgrading.
A sign that it is getting stable will be when the likes of Red Hat and HP begin offering 3-5 year support agreements on OpenStack; right now both are at 18 months. In HP's case the last six of those 18 months are dedicated to assisting migration to the next version of OpenStack, which they emphasize is not trivial.
ESXi is not cheap, to be sure (though the price you quote above seems pretty cheap - perhaps that is not Enterprise Plus, or maybe you have good bulk discount licenses). For me, I compensate as much as I can by getting the most powerful processors available, to better utilize the license (which for us, with Enterprise Plus and three years of production support, comes to about $5k/socket without any fancy VMware add-ons, none of which I need).
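The "most powerful processors" point is simple amortisation of per-socket licensing: the $5k/socket figure is from the comment above, while the cores-per-socket values and the VMs-per-core consolidation ratio below are assumptions purely for illustration.

```python
# Sketch of per-VM licence cost under per-socket pricing.
# $5k/socket is the figure quoted in the comment above; the
# consolidation ratio (VMs per core) is an assumption.

LICENCE_PER_SOCKET = 5_000  # USD, incl. 3 years production support
VMS_PER_CORE = 4            # assumed consolidation ratio

def licence_per_vm(cores_per_socket):
    """Licence cost amortised across the VMs one socket can host."""
    return LICENCE_PER_SOCKET / (cores_per_socket * VMS_PER_CORE)

for cores in (8, 16, 32):
    print(f"{cores} cores/socket -> ${licence_per_vm(cores):.2f} per VM")
```

Since the licence price is fixed per socket, doubling the cores (and hence VMs) per socket roughly halves the licence cost attributable to each VM - which is exactly the commenter's compensation strategy.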
The value is still there for me: the stability and performance of vSphere, and compatibility across operating systems (not many Linux shops are likely to deploy Hyper-V). I had my first vSphere crash (PSOD) in eight years of using it back in June; support believes it was a rare condition that was fixed by a firmware update released in May.
I don't understand the article's author, though; they seem to think that open source is the answer to everything, when it is not. (I say that as a Linux user of 18 years in personal and professional use, and a Linux desktop user for 16 of those years, including right now.)
If you want an open-source hypervisor, there are at least two of them out there. What was that? They suck? (Why else are you asking VMware to open-source vSphere?) Well, you get what you (don't) pay for.
Meanwhile my vSphere 4.1 clusters keep humming along running our ~550 or so Linux VMs.
I dunno, we run almost all of ours on blades. We have 256GB in our semi-new blades, and I think the blades we are looking at for the tech refresh hold 768GB. Usually we run out of memory before anything else in our blade farm... but then again, for us, 2x10Gb NICs are fine. I can see how 2 or 4 NICs may not be enough in some places, but here it is.
The fact that VMware is so well established, and enjoys significant reliability and scalability as well as a sizeable, strong base of third-party applications and support, would be a plus if EMC open-sourced ESXi or even vSphere.
The only difference then would be the cost of implementation compared to the present pricing structure.
It is not like Xen or KVM, which started out open source (which inherently meant an immature code base with substantial churn, a small community and weak professional support organizations) and then tried to grow applications and support from scratch.
VMware and KVM are one of those examples that show the focus of both proprietary and open source software. Proprietary software has to keep adding new features to justify the price tag; people don't like to think they're paying for bug fixes. Meanwhile, open source software like KVM isn't driven by the upgrade treadmill, so they focus on doing the basics better than anyone else.
When you get down into the hypervisor level, KVM is far more manageable than ESXi. But you need other stuff on top of it to make it "enterprisey."
Much as I despise it, the vSphere web client is cross-platform. What sucks is it's Flash-based, slow and clunky. Will they be including it with ESXi 6 though or will I need vCenter to administer it?
I'm evaluating OpenStack at the moment and I'm not 100% sold on it. HP Helion looks good, but they say the majority of their users run ESXi as the hypervisor for storage; that's what we'll be doing anyway. Also looking at vCloud Suite - which I like, tbh - but nobody at VMware Sales seems to want to talk to me about it, despite constant "Contact Sales" emails and my leaving messages.
Wonder if the competition (Hyper-V/Azure and others) is scaring VMware a little?
First they've "rebranded" (always a bad sign!) some of their stuff with a stupid name, "vRealize" - what's that about? So now we have vSphere-branded items, vCloud-branded items, Horizon, vCenter... Everyone I talk virtualisation with just refers to "VMware" anyway (same as 3PAR/LeftHand instead of StorWhatever).
Anyway, simplification of bundles and lower pricing are needed to stop the encroaching competition. They are in danger of becoming like Oracle (or Novell): struggling to make money from established clients, but losing when trying to win new business.
If you are into mass-scale VM deployments and the use of high-tech disk technologies, you can still use KVM, though as a previous commenter said, it's not very "enterprisey". However, if you are running maybe 100 or so VMs, the low-level command and control and the efficiency of the technology make it a more than satisfactory answer to the VM question.
One day we'll be able to just plop OpenStack on top as an enterprise upgrade, but today it is a non-trivial exercise to get it set up and doing the good stuff. A simple single server install is relatively easy, but doesn't give you the mobility and data redundancy features you were probably looking for in doing this in the first place.
You'll see Windows, Office and Oracle (all of them)... open source before ESXi. Just saying.
VMware provides an "ok" free version of ESXi that will suit many...
The future, however, does not belong to VMware. They will start to wane as KVM/OpenStack takes their place (but that could be a few years off).
If you want something VMware-like today... there's always Xen/CloudStack... and the free (open) version of Xen doesn't have restrictions like free ESXi does.
Biting the hand that feeds IT © 1998–2019