The case for storage virtualisation

‘Virtualisation’ is acknowledged by IT professionals as being a pretty good thing. But when we dig a bit deeper into the results of our surveys, it very quickly becomes clear that much of the recognition is centred on x86 server virtualisation technologies and solutions. Other areas, notably desktop and storage virtualisation …

COMMENTS

  1. GaileF0rce

    Isn't this SAN by another name

    I'm sure I'm missing something here, but isn't virtual storage just another name for a SAN? Can someone explain (without patronising me too much, in case I'm missing the obvious)?

  2. Sam Paton

    Basically

    The basic difference is that the target you are mapped to is virtual, not an actual SCSI address on a physical array. It essentially turns all your targets into a lookup table, so you can move a LUN between arrays or RAID groups without reconfiguring the server.

    You may have a bit of security to do on the array, depending on the way you do it, but it does make things very easy to move around.
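
    As a rough illustration (not any vendor's actual interface; the target and array names are made up), think of the virtualisation layer as a lookup table between the virtual target the server sees and whichever physical LUN currently backs it:

        # Toy sketch only: the server always addresses the same virtual target;
        # the virtualisation layer resolves it to the physical array/LUN behind it.
        virtual_map = {
            "vtarget-01": {"array": "array-A", "lun": 7},
            "vtarget-02": {"array": "array-B", "lun": 3},
        }

        def resolve(virtual_target):
            """Look up the physical location currently backing a virtual target."""
            return virtual_map[virtual_target]

        def migrate(virtual_target, new_array, new_lun):
            """Move the backing LUN; the target name the server uses never changes."""
            virtual_map[virtual_target] = {"array": new_array, "lun": new_lun}

        print(resolve("vtarget-01"))          # {'array': 'array-A', 'lun': 7}
        migrate("vtarget-01", "array-C", 12)  # data moved behind the scenes
        print(resolve("vtarget-01"))          # {'array': 'array-C', 'lun': 12}

    Move the data, update the table, and the server never notices.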

    Make sense?

    1. GaileF0rce

      Makes perfect sense

      Thanks, guys. A lot clearer now.

  3. Fazal Majid

    Depending on your definition

    Every hard drive uses some form of logical block addressing, where the drive pretends to have an implausible number of platters and heads to work around limitations in PC software. That is a form of virtualization.

    Similarly, RAID arrays export a view of a bunch of drives, interconnects, RAM, NVRAM and SSD cache as a logical (i.e. virtual) SCSI unit. That is another form of virtualization, on a larger scale.

    The Register article refers to a yet larger scale, where collections of arrays on a SAN are aggregated into a virtual LUN, i.e. three nested levels of virtualization.
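
    A simplified sketch of that nesting (illustrative numbers only, not any particular array's layout): the OS asks for a logical block on what it thinks is one disk, the array maps that block across its stripe set, and the SAN layer adds one more level of indirection on top.

        # Simplified, hypothetical address translation through the nested layers.
        STRIPE_SIZE = 128          # blocks per stripe chunk (made-up figure)
        MEMBER_DRIVES = 4          # drives in the (RAID-0 style) stripe set

        def array_translate(logical_block):
            """Map a block on the exported LUN to (member drive, physical block)."""
            chunk, offset = divmod(logical_block, STRIPE_SIZE)
            drive = chunk % MEMBER_DRIVES
            physical_block = (chunk // MEMBER_DRIVES) * STRIPE_SIZE + offset
            return drive, physical_block

        # The SAN layer adds one more indirection: which array backs the virtual LUN.
        virtual_lun_to_array = {"vlun-42": "array-B"}

        lba = 1000
        print(virtual_lun_to_array["vlun-42"], array_translate(lba))  # array-B (3, 232)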

    You need either a tremendous amount of legacy hardware to repurpose to make the ROI case for expensive and poorly standardized SAN virtualization gear, or a spectacularly incompetent and inefficient sysadmin group that cannot manage the existing LUNs using OS mechanisms such as volume managers, or advanced filesystems like VxFS or ZFS, to do this virtualization at the OS level. Large scale and incompetence are far from uncommon in the Fortune 500, but that is a limited market on which to build a successful business.

  4. Crazy Operations Guy

    An HDD-less Datacenter

    I've been running a 5,000 (virtual) machine DC for a while where none of the servers have their own storage. All the physical machines run a small hypervisor pulled down from a boot server, and all the VMs actually live on iSCSI shares on a huge 2-petabyte file server. Since all the machines use live migration and the NAS has massive redundancy, I only have to go to work about once a week to swap HDs in the NAS (the rest of the hardware is handled by the OEM, so I just queue up a bunch of tickets, meet them there to point out the servers, then we head down to the pub together).

    The application side is even easier: everything is clustered, so if one half breaks, we have a script that restores it from a known working snapshot and spends a few minutes catching up from a working system in the cluster. Patches are handled automatically (only one of the machines in the cluster gets the patch first, to ensure it works before it is installed on all the others), and clustering also allows individual systems to be rebooted without downtime.
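
    The patch logic is roughly the sketch below (Python-ish pseudocode; take_snapshot, apply_patch, health_check and rollback are stand-ins for whatever your hypervisor and config tooling actually expose, not real APIs):

        import time

        def patch_cluster(nodes, patch, take_snapshot, apply_patch, health_check, rollback):
            """Patch one canary node first; continue only if it stays healthy."""
            canary, rest = nodes[0], nodes[1:]
            snap = take_snapshot(canary)        # known-good state to fall back to
            apply_patch(canary, patch)
            time.sleep(600)                     # soak period before judging the canary
            if not health_check(canary):
                rollback(canary, snap)          # restore the known-good snapshot
                return False
            for node in rest:                   # clustered, so nodes reboot one at a time
                apply_patch(node, patch)
            return True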

    Desktops are all thin clients running VDI in the datacenter, so any error is corrected by a rollback to a working snapshot. All the apps use app virtualization and all user files live on a file server, so nothing the user sees is affected by rolling back the snapshots.

    So while the company has about 15,000 users, the IT department is made up of 2 Helldesk people (although most of that is handled by some scripts attached to a website, so they have it easy as well), 2 ops people (me and the networking guy), 5 devs (web and app) and 2 suits.

  5. Anonymous Coward

    Mix and match

    You can define different classes of QoS (quality of storage) and "hide" the real devices behind a virtualization device. I used a Sanrad V-3000 a couple of years ago to aggregate multiple Nexsan and Sun storage boxes as the backend for enterprise storage (Oracle DB and file services).
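
    Purely as an illustration of the idea (not the Sanrad interface; the class and pool names are invented), those classes boil down to a mapping from a named tier to the backend pools hiding behind the virtualization device:

        # Illustrative only: consumers ask for a class, never for a specific box.
        storage_classes = {
            "gold":   ["nexsan-pool-1", "sun-pool-1"],   # fastest spindles, most cache
            "silver": ["nexsan-pool-2"],
            "bronze": ["sun-archive-pool"],
        }

        def provision(volume_name, size_gb, storage_class):
            """Pick a backend pool for the requested class; the caller never sees it."""
            pool = storage_classes[storage_class][0]     # naive placement policy
            return {"volume": volume_name, "size_gb": size_gb, "backend": pool}

        print(provision("oracle-redo-01", 200, "gold"))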

  6. Trevor Pott o_O

    Too expensive!

    Regular readers of the comments section (especially on the Freeform Dynamics articles) are probably aware that I am a fan of x86 virtualisation. I am one of two network administrators at work, and we are entirely dependent on x86 virtualisation. We are a smallish company (60 people), but we run 75 virtualised servers and 50 virtualised desktops on top of 14 active servers. (We aren’t an IT company or a web company. We print pictures.)

    Now, this is all whitebox kit. We work with a distributor that allows us a fair amount of leeway, and this is the only way we have been able to accomplish it on our budget. We poke through their price list, design a system to our specifications, and they send it as a built system, warrantied by the distributor for three years. This is playing at being a “real boy” on a small-business budget.

    What it does not allow for is storage virtualisation. Like it or not, until recently any form of storage virtualisation was just too damned expensive. A Fibre Channel SAN would cost more than every piece of hardware on my network combined. A gig-E whitebox SAN would probably have been doable, but not even close to fast enough.

    When you start talking about providing enough I/O capacity to run 125 VMs, you are either talking about multiple gig-E iSCSI servers, Fibre Channel, or getting into 10Gig-E. And there’s the ticket to this whole mess: 10Gig-E is expensive right now (by SME standards), but it won’t be for long. My next server refresh is in 2012…and we will be going to a 10Gig-E iSCSI SAN. The x86 boxes that host the VMs will be nothing more than dumb boxes with a CPU, some RAM and a pair of NICs.
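
    The back-of-the-envelope maths is brutal, even at raw wire speed and ignoring iSCSI and filesystem overhead:

        # Rough wire-speed figures only; real-world throughput will be lower.
        GIGE_MB_PER_S = 1_000 / 8        # ~125 MB/s on a single gigabit link
        TEN_GIGE_MB_PER_S = 10_000 / 8   # ~1,250 MB/s on a single 10Gig-E link
        VMS = 125

        print(f"gig-E:   ~{GIGE_MB_PER_S / VMS:.1f} MB/s per VM if all {VMS} hit storage at once")
        print(f"10Gig-E: ~{TEN_GIGE_MB_PER_S / VMS:.1f} MB/s per VM under the same worst case")

    One megabyte per second per VM on gig-E, before you even start worrying about IOPS. That is why gig-E was never really an option.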

    So where does this leave me? Local storage. It sucks, and is the absolute bane of my existence. When an ESXi box pukes its guts all over the floor (bad DIMM, dead disk, you name it), it’s “drag the VMs off the ESXi box, upload them to the spare, and hope the users don’t gripe too much at the downtime”. (ESXi 4 has a horrific console I/O limitation that says “thou shalt not sling VMs faster than 20MB/sec”, which makes this even more terrible than it really ought to be. It is solvable by creating a small iSCSI box and using it as a “bounce server”: the ridiculous 20MB/sec limitation doesn’t apply to mounted LUNs.)

    The long and the short of it is that block-level storage virtualisation is still a big boys’ game. Real budgets are needed. SMEs can play in this space via appliances, but not if they need anything close to decent throughput or enough IOPS to run more than a very small number of VMs. But not for long…

    As to storage virtualisation for file storage…I’ve yet to see a business case. If the data being stored is just the company’s collection of Excel spreadsheets, PDFs, Word documents and so on, why even bother with storage virtualisation? It should all be live-replicated to a backup server to begin with, and regularly backed up. I am open to thoughts and reasons as to how storage virtualisation could make life “better” here; I’ve just never seen a case made for it.

