NetApp could use Microsoft to beat off VMware's virtual tool

Let's think about putting some storage Lego bricks together in a new combination. The bricks are labelled NetApp, Data ONTAP, ONTAP Edge, VSA, VMware and Hyper-V. A VSA is a virtual storage appliance - storage array controller software running as a virtual machine, turning the host server's local disks into a shared …

COMMENTS

  1. Anonymous Coward

    Interesting...

    I've been doing a bit of research into W2012 recently (I work in backup/storage, so need to know it) and it's interesting that MS seem to be offering a lot of what would classically be considered "array functionality" in the OS: replication, disk pools shared between servers, dedupe, online spares, mixing of RAID/protection levels within a pool, and sharing of filesystems between servers. This is an interesting development, as it will certainly put the wind up the medium-sized array manufacturers - why would anyone purchase that functionality from them if it's available in the OS with JBOD arrays?
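
    For anyone curious, the pooling side really is a few lines of PowerShell on Server 2012. A minimal sketch, assuming a box with spare unpooled local disks (the pool and disk names here are invented):

        # List local disks that are eligible for pooling
        Get-PhysicalDisk -CanPool $true

        # Gather them into a pool (friendly names are made up)
        New-StoragePool -FriendlyName "JbodPool" `
            -StorageSubSystemFriendlyName "Storage Spaces*" `
            -PhysicalDisks (Get-PhysicalDisk -CanPool $true)

        # Carve a mirrored space out of the pool
        New-VirtualDisk -StoragePoolFriendlyName "JbodPool" -FriendlyName "Data" `
            -ResiliencySettingName Mirror -Size 500GB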

    Also, from a backup point of view: if the filesystem can dedupe itself, why would you license the dedupe functionality in your backup product or array? Better still, with 2012 you can dedupe files which aren't being used and keep the ones which are in use fully inflated.
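
    Turning dedupe on is similarly terse. A rough sketch, assuming E: is a data volume (the drive letter is just an example):

        # One-time feature install, then enable per volume
        Install-WindowsFeature -Name FS-Data-Deduplication
        Enable-DedupVolume -Volume E:

        # Kick off an optimisation pass and check the savings
        Start-DedupJob -Volume E: -Type Optimization
        Get-DedupStatus -Volume E: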

    All pretty interesting...

    1. Lusty

      Re: Interesting...

      Take a look at how those W2012 features actually work and you'll realise why the storage vendors don't feel threatened. Anyone implementing them will sooner or later move to something that works well and performs well - the built-in Windows stuff is designed in a way that prevents both at the moment.

      1. Anonymous Coward

        Re: Interesting...

        Can't say I agree. Those features work just fine, are mostly performance-neutral and are very easy to configure.

      2. Anonymous Coward

        Re: Interesting...

        @Lusty - Specifically, what problems do you see? I see an OS-based replacement for array functionality at small companies. It's never going to replace a VMAX, but that's not what it's intended to do. Like any OS-based disk management system, it's never going to replace the big boys' hardware, but in my opinion this has raised the level at which a small-to-medium company needs a serious array. You also end up with the option to host some things at the array and some at the OS level, very much as you would with Veritas Foundation Suite. Maybe you want to RAID in hardware but pool disks across various servers with the OS controlling it - now you can with Windows. Maybe you want a deduped filesystem without shelling out license costs to the array manufacturer - now you can.
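
        On the mixing point, a single pool can back spaces with different protection levels. A sketch, assuming a pool called "JbodPool" already exists (all names invented):

            # Two spaces from the same pool, different protection levels
            New-VirtualDisk -StoragePoolFriendlyName "JbodPool" -FriendlyName "FastMirror" `
                -ResiliencySettingName Mirror -Size 200GB
            New-VirtualDisk -StoragePoolFriendlyName "JbodPool" -FriendlyName "BulkParity" `
                -ResiliencySettingName Parity -Size 1TB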

        1. Lusty

          Re: Interesting...

          For instance, the massive performance hit of their dedupe implementation, brought on by the various design decisions they have made, especially when it's used on virtualised servers. The technology is very chatty with the disk, only dedupes closed files, only dedupes files after five days by default, and breaks block alignment by squashing data into the chunk store. Or the performance bottleneck of ReFS being limited to writing to individual disks rather than across multiple drives. There are other examples, but the gist is that for anyone but the smallest companies a proper SAN won't break the bank and will perform better for throughput, IO and latency. All of the major vendors have a SAN for under £5k these days, and a server with lots of disk will cost approximately the same money anyway.
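
          For what it's worth, the five-day default is visible and changeable per volume. A quick sketch, with E: standing in for whatever volume has dedupe enabled:

              # Inspect the current dedup policy on the volume
              Get-DedupVolume -Volume E: | Format-List Enabled, MinimumFileAgeDays

              # Dedupe files as soon as they're closed, instead of after five days
              Set-DedupVolume -Volume E: -MinimumFileAgeDays 0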

          1. Lusty

            Re: Interesting...

            I'm not bashing Microsoft, by the way - I like Server 2012 a lot. All I'm saying is benchmark before you proceed, as a lot of the Server 2012 storage features assume sole access to the storage, and they assume that the storage is underutilised.
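
            Even a crude smoke test beats nothing. A rough sequential-write check in PowerShell (path and sizes invented; the file cache will flatter the numbers, so a proper tool such as Iometer or SQLIO is the real answer):

                # Write 1GB in 64MB chunks and time it - crude, but catches gross problems
                $chunk = New-Object byte[] (64MB)
                $elapsed = Measure-Command {
                    $fs = [System.IO.File]::OpenWrite('E:\benchtest.bin')
                    1..16 | ForEach-Object { $fs.Write($chunk, 0, $chunk.Length) }
                    $fs.Close()
                }
                "{0:N0} MB/s" -f (1024 / $elapsed.TotalSeconds)
                Remove-Item 'E:\benchtest.bin'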

            1. Anonymous Coward

              Re: Interesting...

              That's the key - benchmark everything before use. Never use anything based on what someone says it will or won't do; always check it out for yourself.

              There are many cheapo SAN-hosted arrays, but they tend to be fairly poor on basic features, with lots of stuff at extra cost, and they also tend to limit you in one way or another - GigE only, no redundant ports, etc. Now, two machines with only a single iSCSI connection apiece could be hooked up along with a bunch of Windows clients, and the features of the OS can augment the features of the array. But, like you said, testing is the key.
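
              Hooking a Windows box up to such an array is a couple of cmdlets these days. A sketch, with the portal address obviously a placeholder:

                  # Point the software initiator at the array and log in persistently
                  New-IscsiTargetPortal -TargetPortalAddress 192.168.1.50
                  Get-IscsiTarget | ForEach-Object {
                      Connect-IscsiTarget -NodeAddress $_.NodeAddress -IsPersistent $true
                  }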

              1. VirtuallyPresent

                Re: Interesting...

                I think this is the age-old discussion that will bring out valid opinions on both sides.

                There is a cost associated with either method; you will pay in some way regardless.

                It also depends on the size and segment of the industry. I really don't agree with the increased-pricing theory of MS vs Linux etc. With a large buy you can negotiate price: all large OEM software and hardware vendors will drop their pants to get a large PO, so there is power in the size of the purchase. Any large customer will want support and reliability, and five nines comes at a cost. If you have the talent in-house to operate and maintain open source systems, then that is great, but there is a cost associated with that as well.

