VMware's vSphere Storage Appliance (VSA) has restrictions. It is a virtual NFS filer only, not a block-level storage facility as VMware implied, and it is not accessible by apps. Our understanding, based on VMware's announcement material, was that the vSphere Storage Appliance (VSA) pools the direct-attached storage in 2 to 3 …
And I thought virtualisation was supposed to simplify things. Looks like the Acronym Police need to be called over there.
Other VSA options...
Why would anyone choose this over HP's P4000 VSA, which has a list price of around $5000 and does support iSCSI? Plus it has a bunch of additional features not present at all in the VMware VSA.
Unless VMware expect customers to pay a premium for being able to manage the VSA within vCenter (?)...
I would also expect SMBs to consider OpenFiler with a support package, which is even better value, and has even more features too...
So what can't you do with VMware's VSA?
I think it is fair to say that introducing a VSA is a step in the right direction for VMware, but it looks like they have taken quite a large step off the path. Their VSA has a lot of limitations for $6k a server, especially when you can only utilise 25% of the server storage and you still need a third server as the Neutral Storage Host for vCenter etc.
I think considering there are a fair amount of Certified and non-certified solutions out there for VMware, end-users would have expected more bang for the buck.
Check out our VSA (StorMagic SvSAN) as an example of that. Although it only supports 4.0 and 4.1, it has few limitations, and it is currently undergoing VMware certification for availability with the vSphere 5 release.
Steve - StorMagic
Source of confusion?
I suspect the source of the block/file confusion is that ESX can use NFS as though it is block storage. Some people use this as an alternative to iSCSI. Oracle does a similar thing for database storage over NFS. Quite popular with NetApp users.
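For illustration, mounting an NFS export as a datastore is a one-liner on an ESXi host. This is a sketch only: the filer hostname, export path, and datastore name below are made-up placeholders, and the exact flags can vary between ESXi versions.

```shell
# Mount an NFS export as a datastore on an ESXi 5.x host.
# filer01, /vols/vmstore and nfs-datastore1 are hypothetical names.
esxcli storage nfs add --host=filer01 --share=/vols/vmstore --volume-name=nfs-datastore1

# List mounted NFS datastores to confirm the mount.
esxcli storage nfs list
```

ESXi then stores the VMDK files on that NFS volume, so each guest still sees an ordinary virtual block device even though the backing store is file-based.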
Yes, here you have it.
To me, as someone who has used a lot of NFS on NetApp filers as the store for VMs, this announcement from VMware was quite clear.
The product doesn't quite stack up against other NFSy solutions, but the intent was clear enough.
Store the VMs on here instead of faffing with local (on-vmhost) storage or iSCSI; let the VMs keep believing that they are real machines that keep all their own data on their own virtual hard drive. Fantastic for virtualising entire Functional Test environments at once.
I can see the use for this: most of our VMs are stored on an EqualLogic SAN, but our host machines all have at least 100GB of local storage, and it would be useful to be able to utilise this whilst still being able to migrate machines between hosts.
However, at $6000 (so presumably at least £5000) it's way out of our price range.
I think we'll be sticking to just using local storage for the less important VMs that need the space (eg the WSUS server).
Expensive use of wasted resources
Any VMware host with local storage nowadays is wasting a few hundred gigs of disk space, since the OS uses only a fraction of what even the smallest hard disks now offer.
This is simply a way of making use of that storage by turning it into a resilient, replicated datastore that clusters can share. No messing around with iSCSI or FC LUNs or NFS or whatever.
Damn expensive though if that pricing is correct..?!
Waaay too expensive!!!
I thought the VSA might be a few hundred bucks, but $5000 per server?!? It would be cheaper to buy a simple shared storage SAN (something like Coraid, which is what I'm running) - at least that way you have more flexibility in what you want to do with the storage rather than being limited by VSA.
The idea is sound, the pricing is ridiculous!!
Well Check this out
Take a look at this: a direct FAQ comparison between the VMware VSA and StorMagic.