Server virtualisation luvvies are looking askance at expensive storage arrays and saying: "Pah! Run the storage controller functions as a system app in a virtual server and use JBODs. That's the way to use commodity hardware." This is the approach of stealthy startup ZeRTO and also Xiotech. Move the array controller functions up …
What integration and hardware testing?
What a lot of these vendors miss out on telling customers is that when you buy the storage array as one unit, hardware and software integrated, you get the benefit of a fully tested platform. EMC/NetApp/IBM etc. will run their own disk controller and drive subsystem firmware and operationally test and verify it with their array software. The value of this cannot be overstated when building a virtual infrastructure that is wholly dependent on the storage layer.
Don't underestimate the value of a fully integrated hardware/software solution that is supported end to end by a single supplier. When things go wrong, you will get a solution or an answer. For this reason I expect traditional storage vendors who offer integrated hw/sw platforms to continue to flourish in the enterprise.
What is new here? The HP P4000 (formerly LeftHand) VSA appliance software does exactly what is described in this article: it places the controller stack onto a VM, running under VMware ESX or MS Hyper-V, and can use internal disks, JBODs, as well as external arrays as its physical storage source.
A third way???
Isn't there another option besides an external array or the main processor, namely a PCIe card?
Thanks for this great analysis of my last blog post. I can honestly say that I agree with almost all of your analysis, but would like to add two additional ‘layers’ that are at the core of the paradigm shift I see in the market.
The first layer of the new hardware/software paradigm shift, as I see it, is not about the location of the CPU, but about the location of the services and management. This can be demonstrated by the Cisco Nexus virtual switch. It doesn't try to save CPU cycles, to replace the physical switches (obviously Cisco wouldn't want that), or even to commoditize them; instead, the Nexus switch provides a better network framework that is completely aligned with the virtualized environment, supporting the flexibility and mobility of the 'new' virtual infrastructure, together with all the right interfaces to the physical switches. That's the key. When you are hypervisor resident, you can support things like vMotion and Storage vMotion without requiring any reconfiguration or complex management tasks.
Physical storage systems have additional severe limitations when working with virtualized environments, many of which cannot be solved simply by better integration with VMware. For example, check out VMware's own SIOC (Storage I/O Control) functionality. It guarantees responsiveness for different VMs, even if they are running on the same datastore. This functionality cannot reside in the storage array (or storage virtualization appliance, even if it is a VM), since the array cannot differentiate the I/Os coming from different VMs. That is why VMware provided SIOC to solve this problem for its customers. Talking to customers using virtualization, we recognize several additional problems that they struggle with on a daily basis and that cannot be solved within the arrays.
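The core of the argument is that per-VM I/O prioritisation requires knowing which VM issued each request, information the hypervisor has and the array does not. As a toy illustration (this is not VMware's actual SIOC algorithm, just a sketch of proportional-share scheduling with hypothetical VM names), a hypervisor-side scheduler can weight dispatch by per-VM shares precisely because it can tag every request with its VM of origin:

```python
import heapq

class FairIOScheduler:
    """Toy proportional-share I/O scheduler.

    Sketches why per-VM I/O control lives in the hypervisor: every
    request is tagged with its originating VM, so requests can be
    dispatched in proportion to each VM's share weight. An array
    sees only untagged block I/O and cannot do this.
    """
    def __init__(self, shares):
        self.shares = shares                     # vm name -> share weight
        self.vtime = {vm: 0.0 for vm in shares}  # per-VM virtual time
        self.queue = []                          # (vfinish, seq, vm, io)
        self.seq = 0                             # tiebreaker for equal vtimes

    def submit(self, vm, io, cost=1.0):
        # A request advances its VM's virtual time inversely to its shares,
        # so higher-share VMs are dispatched more often under contention.
        self.vtime[vm] += cost / self.shares[vm]
        heapq.heappush(self.queue, (self.vtime[vm], self.seq, vm, io))
        self.seq += 1

    def dispatch(self):
        # Issue the queued request with the smallest virtual finish time.
        _, _, vm, io = heapq.heappop(self.queue)
        return vm, io

# "db-vm" holds twice the shares of "web-vm", so under contention it
# receives roughly two dispatches for every one of web-vm's.
sched = FairIOScheduler({"db-vm": 2, "web-vm": 1})
for i in range(3):
    sched.submit("db-vm", f"db-io-{i}")
    sched.submit("web-vm", f"web-io-{i}")
order = [sched.dispatch()[0] for _ in range(6)]
print(order)
```

Under contention, `db-vm` gets three of the first four dispatch slots, reflecting its 2:1 share weighting.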
The second layer is complexity, driven mostly by manageability. When talking to our design partners and prospects, I see a real pain developing in virtualized infrastructure around complexity, flexibility, and support for truly dynamic IT. Storage virtualization appliances, physical or virtual, do not solve this pain. These appliances are a step away from "hardware", or proprietary architectures, but they do not complete the paradigm shift brought by virtualization and clouds. They still operate within the same storage constructs as the physical arrays.
We are not trying to reinvent the wheel, and definitely not to invent a new form of locomotion :) I leave that to scientists and people smarter than me. All I am saying is that the paradigm shift caused by virtualization is creating new storage pains that will require new solutions. In the end, the solution makes the difference, not the architecture. The IT world IS changing.
When I discussed similar topics more than a decade ago, neck deep in the development of information management systems and drinking the software Kool-Aid, I'd have told you that we should abstract and aggregate as much as possible and move the functionality from storage to an app-server layer.
I have long since changed my tune, as a result of spending time in the storage industry and coming to understand how the two worlds intersect. An abstraction of the type you and others suggest counterintuitively increases lock-in and complexity (and often decreases performance) in the pursuit of greater (perceived) flexibility and control.
It's what experts refer to as problem displacement. Problem displacement occurs when the boundaries of a problem are unclear, and action does not solve the problem but merely shifts it to another medium/area and to other people. In this case, it shifts the problem out of the arrays/controllers and into yet another collection of third-party applications.
I challenge anyone who makes the claim that the above is a better long-term strategy to put up or shut up. Show us in a real world environment, side-by-side with an existing system, that moving the functionality off the controllers and into an external app is superior over time. Until then, it's just a bunch of hot air.
problem displacement challenge
I'll give you an example...VMware doing volume management.