You'll have the Red Headed League on to you.
VMware is planning logical storage containers that do away with logical unit numbers (LUNs) and NFS mount points - and could stifle storage developments outside a group of five suppliers. VMware's plans were disclosed at a VMworld 2011 presentation (VSP3205) and described by Wikibon analyst David Floyer. They have also been discussed …
Sony's abuse of their customers has forced me to boycott all Sony products and services. This sort of anti-customer behavior (seemingly) on the part of VMware will force me to boycott all VMware products and services as well, and to advise all of my clients to do likewise.
. . . I'm not sure I see your point. Are you talking about the new pricing model or about the subject of this article? If the latter, what VMware is doing is actually a good thing, because it allows the VMware administrator to virtualize storage more effectively. When combined with Storage DRS (SDRS) and SIOC, it gives the VMware administrator much more control over what disk performance is available to VMs, and it gives the hypervisor more ability to auto-tune storage distribution.
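To make the "control over disk performance" point concrete, here is a toy sketch (not a VMware API; the VM names and numbers are invented) of the proportional-share model that SIOC applies when a datastore is congested: each VM receives device throughput in proportion to its configured disk shares.

```python
# Toy illustration of shares-based I/O arbitration, as used by SIOC
# when a datastore's latency threshold is exceeded. Hypothetical data.

def allocate_iops(total_iops, shares_by_vm):
    """Split the available IOPS among VMs in proportion to their shares."""
    total_shares = sum(shares_by_vm.values())
    return {vm: total_iops * s / total_shares for vm, s in shares_by_vm.items()}

# vSphere's default disk-share presets are Low=500, Normal=1000, High=2000.
shares = {"db-vm": 2000, "web-vm": 1000, "batch-vm": 500}
print(allocate_iops(7000, shares))
# → {'db-vm': 4000.0, 'web-vm': 2000.0, 'batch-vm': 1000.0}
```

The point of the article's VM Volume idea is that this kind of policy could attach to an individual VM's storage object directly, rather than being arbitrated across whatever happens to share a datastore.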
If your issue is with the storage "cartel" calling the shots, I have some news for you: this sort of thing is already happening. If you think VMware has not already been working with the major storage vendors to bake in advanced functionality, I have a bridge I'd like to sell you. Not, again, that doing so is a bad thing. From a VMware administration perspective, it would actually be great to hand vSphere a bunch of disks and allocate them directly without having to dick about with LUNs, NFS mounts, etc.
It is not just server people who are failing to grok how much raw power is flying around in an average x86 box; it is network people too.
In any case, while there is a method to the "move it to another VM or the hypervisor" madness, there is a flaw in this particular VMware idea.
If the intelligence is moved to the hypervisor (or to a dedicated VM running a virtual storage controller), the dedupe scope will be limited to what that particular hypervisor sees. So instead of getting dedupe across a whole rack, you will be getting dedupe only within each of the servers in the rack and the VM images on it.
So all in all, it does not scale unless you make the controllers talk to each other.
Disclosure - EMCer here.
Chris - the VM Volume "advanced prototype" (shown in VSP3205 at VMworld) was a technology preview of this idea, and yeah, it's an important idea, and a disruptive idea.
Anyone who has managed a moderate to large deployment of virtualization knows that the "datastore" construct (on block or NAS storage) is not ideal, because the properties of that datastore tend to be shared by ALL the things in it. It would be better if the level of granularity was a VM, but WITHOUT the management scale problem. That's what was shown.
Today, the storage industry (and of course, I personally think that EMC does this more than anyone, and can prove it) is doing all sorts of things to be more integrated (vCenter plugins, making the arrays "aware" of VM objects through bottom-up vCenter API integration, VASA, VAAI, etc.) - but unless something changes, we're stuck with this core problem: VMs are the target object, but LUNs and filesystems kind of "get in the way".
I'm sure that VMware will run it like all the storage programs they have run. The APIs are open, and available to all - but of course, the early work tends to focus on the technology partners supporting the largest number of customers.
More customers use EMC storage with VMware than any other type, and EMC invests more resources and R&D in the partnership (both by a long shot) - so it's no surprise that the demonstration in the session featured EMC storage so prominently. Pulling off something like that is NOT easy, and a lot of people put a lot of work into it.
For what it's worth - VMware is simply CHANGING what is important to customers and valuable from storage. Certain data services are moving up (policy-driven placement of VMs), certain ones are pushing down (offload of core data movement), and "intelligent pool" models (auto-tiering, dedupe) become more valuable as they map to simpler policy-driven storage use models.
While this was just a technology preview - if it comes to pass - vendors who are able to deliver strong VM Volume implementations, with VM-level policy and automation, will become even more valuable.
Just my 2 cents.
It turns out HP was involved. So I think the title of the article needs to be updated. This story originally surfaced on Wikibon, and they have corrected their article...
You'll notice that "Hewlett Packard" has now been added to the list of vendors.
What I don't understand, either in this article or in the Wikibon piece, is why VMware developing a new way to handle storage is seen as "stifling innovation". It strikes me that VMware is using its position in the industry and its relationships with the storage vendors to drive innovation...
No, VMware is choosing winners and losers rather than coming up with an open API. There are products out there superior to what the big boys are producing. They are freezing out the people who really do the innovative stuff, which is the little guy. It opens the door to other folks like Xen and even...Microsoft.
VMware is making a mistake.
"It opens the door to other folks like Xen and even...Microsoft.
VMWare is making a mistake."
EXACTLY, though one could argue that it's not a mistake but the good ol' proprietary EMC approach... it failed back then, it will fail again, and it will cost VMware dearly while helping Citrix and Microsoft.
Until the mid '90s, the storage world for platforms other than the mainframe was locked in: each server vendor supported only its own storage. Data General opened this market with the CLARiiON. EMC's marketing philosophy was always based on lock-in by features. For example, PowerPath was originally created by Conley Corporation as open path management, but after acquiring Conley, EMC limited its use to Symmetrix only. Similarly, the first virtual tape product, CopyCross, was developed by EMC engineers to support any storage (in fact the initial beta site ran on an IBM ESS subsystem at the Technion in Haifa) but was later limited to supporting Symmetrix only.
Is EMC trying to create a new lock-in with the help of VMware? It may result in a Pyrrhic victory; there are alternatives to the x86 virtualization hypervisor (Hyper-V, Citrix Xen and Red Hat KVM). If I had to evaluate a virtualization platform for the future today, I would select an open approach.
I’d like to address the statement about HDS being left out of the next-generation API supplier group. HDS is an Elite level partner in the VMware Technology Alliance Partner program and as such, we work very closely with VMware on future technology development. In fact, HDS was the first company to fully certify virtualized storage with VMware VAAI earlier this year.
HDS did not participate in the demos shown during session VSP3205, titled “Tech Preview: vStorage APIs for VM and Application Granular Data Management”, at VMworld because HDS does not publicly demonstrate technology based upon pre-GA code, whether ours or our partners'. The demos shown during this session were prototypes based upon VMware code that will not be released for at least a year, possibly not until the next version, VMware vSphere 6. VMware added the caveat that the vendors who participated in the demonstration have not even committed to supporting the APIs.
Our decision not to participate in the demo during the session doesn’t mean we won’t support future VMware APIs; in fact, the opposite is true. Hitachi Data Systems and VMware are engaged at all levels of interoperability testing and certifications, support and engineering to assure timely and broad qualification and certifications for our mutual customers. HDS will continue to release VMware integrated solutions based upon mature VMware technology in line with VMware general availability releases of technology.
Disclosure: VP of Alliances, Hitachi Data Systems
Biting the hand that feeds IT © 1998–2019