Server storage slips on robes, grabs scythe, stalks legacy SANs

In ten years, legacy enterprise storage-area networks (SANs), network-attached storage (NAS), and direct-attached storage (DAS) revenues will have lost 88 per cent of their present value, according to Wikibon research. Nearly 90 per cent of today's storage revenues will then be split between enterprise server SANs and …

  1. Mikel

    Lest we forget

    45Drives.com

    Where you can get a DIY server storage node that puts 480 TB in 4U, and you can customize it all you like. Many of the virtual SANs above, like HP's VSA, run on it under Linux. The hardware actually costs less than HP's software in that case.
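
    As a quick sanity check on that headline figure (a sketch: the 60-bay/8 TB split is my assumption; only the 480 TB and 4U numbers come from the post):

        # 480 TB in 4U, assuming the 60-bay chassis populated with 8 TB drives.
        bays, drive_tb = 60, 8
        print(f"{bays} bays x {drive_tb} TB = {bays * drive_tb} TB raw in a 4U chassis")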

    1. Anonymous Coward

      Re: Lest we forget

      Please don't spam.

      And those storage servers are limited to 10GbE NICs, which is pretty crap for large-scale storage. :(

      1. Anonymous Coward

        Re: Lest we forget

        *whoooooosssshhhhh*

      2. Mikel

        Re: No spam

        I'm not affiliated with the companies, I just like the products.

        It's a box. You can put any hardware you want in the box. Got 100GbE CNAs? 12x EDR Infiniband? 128 GFC engineering samples magically appeared? If your software supports it, knock yourself out.

        For that stuff, though, you would want to populate it with all SSD and immense RAM, so we are talking about a different use profile.
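
        As a rough sense of scale (a sketch - the per-drive and NIC throughput figures below are my own illustrative assumptions, not vendor specs):

            # Aggregate pool throughput vs. NIC capacity - illustrative assumptions only.
            def pool_gbps(drives, mb_per_s_each):
                return drives * mb_per_s_each * 8 / 1000.0  # MB/s -> Gbit/s

            print(f"45 x HDD (~150 MB/s sequential each): ~{pool_gbps(45, 150):.0f} Gbit/s")  # ~54
            print(f"45 x SSD (~500 MB/s each)           : ~{pool_gbps(45, 500):.0f} Gbit/s")  # ~180
            print("NICs: 10GbE = 10 Gbit/s, dual 100GbE = 200 Gbit/s")
            # Random I/O on spinning disks is far lower, so actually feeding 100GbE
            # pipes calls for SSDs and plenty of RAM for caching.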

        1. Justin Clift

          Re: No spam

          >I'm not affiliated with the companies, I just like the products.

          >It's a box. You can put any hardware you want in the box. Got 100GbE CNAs? 12x EDR Infiniband? 128 GFC engineering samples magically appeared? If your software supports it knock yourself out.

          Got it. Wasn't clear from their website, as even with their "Custom" builds they're only giving choices of 10GbE. Seemed very limited.

          1. Mikel

            Re: No spam

            >Got it

            Cool.

            At the basic level it is just a bare bones computer case, like a bare bones PC chassis, except built for server motherboards and with room for 30, 45 or 60 drives. The open source design is from Backblaze (also no affiliation), who used it to build the servers behind their $5/month unlimited backup subscription and thought it would be neat to share the design.

            The OEM and store is actually the fab shop Backblaze ordered their kit from. Their mission isn't to sell you storage servers; it's to sell bespoke sheet-metal fabricated assemblies. They aren't a storage vendor; they're a tin bender. But Backblaze told people they had it made there and authorized them to make it for anybody who asked, and to mod it as much as they liked - and people asked, so here it is.

            Originally they just sold the metal case and mounting hardware, but people asked for more convenience, so they now offer the configurations that Backblaze has vetted as cost effective, performant and reliable for their use - which is a low-bandwidth use: typically written once, ever, from the Internet and read almost never. Hence only a 10Gbps NIC. Netflix uses these for their Open Connect CDN servers, which they provide for free to any ISP who asks, as it helps reduce Internet backbone usage (and hence the fees Netflix pays).
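
            To put a number on "low bandwidth" (a sketch assuming a 480 TB pod and a single 10 Gbit/s link at line rate, ignoring protocol overhead):

                # Time to fill a 480 TB pod over one 10 GbE link at line rate.
                capacity_gb = 480 * 1000        # 480 TB in GB
                link_gb_per_s = 10 / 8          # 10 Gbit/s ~= 1.25 GB/s
                days = capacity_gb / link_gb_per_s / 86400
                print(f"~{days:.1f} days just to fill the pod")   # ~4.4 days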

            So it sits nicely on topic in an El Reg article about DIY server NAS & SAN.

            You can read more about the design decisions, status, components and such on their blog. Definitely a worthy read for anyone approaching the Deep Storage Cheap question. The Backblaze engineers are brilliant and adventurous, but also conservative and data based. In addition to this design they also publish definitive failure-metrics analysis for their systems, which is the most thorough independent data published by anyone except Google. You aren't going to get that data out of HP or Dell. https://www.backblaze.com/blog/storage-pod-4-5-tweaking-a-proven-design/

  2. Hyperconvergenceisthewave

    You forgot a player

    I know they are relatively new to the space, but Gridstore should be considered because they are the only all-flash hyperconverged play that is 100% focused on the growing Hyper-V market. A lot of organizations are looking for ways to reduce their VMware spend, and Gridstore is different from the other players: erasure coding versus replicas for data protection, QoS on a per-VM basis, and full integration with System Center and Azure. The ability to scale compute independently of storage. If Microsoft ever invests in them or pays attention, watch out.

    Not intended as spam... I just think the interests of the Hyper-V market are underserved...

    1. Trevor_Pott Gold badge

      Re: You forgot a player

      Gridstore's cool, but has a few problems

      1) Next to no sales. Who has ever seen a Gridstore in the wild? Half the storage analysts I talk to are convinced they're functionally a myth. I'm not entirely sure myself that they're really more than trolling.

      2) Nutanix does Hyper-V. And they do it damned well. SimpliVity, Maxta and many, many others will be there very soon. (I expect by end of year for most of them.)

      3) Marketing. Gridstore's budget for marketing and community engagement appears to be the square root of negative fleventy. This goes back to "who has ever seen a Gridstore box in the wild?" These things aren't in front of the kinds of people who give talks at user groups or Spicecorps or what-have-you. Gridstore has virtually no mindshare amongst the technorati, so even people who know about it tend to forget when crunch time comes and they have to choose a solution. This leads us to...

      4) Really terrible channel support. Gridstore may have a channel strategy. If so, I haven't been able to detect it. If they do have someone out there kicking the channel in the ASCII then those channel monkeys aren't doing their job. (See: 3.) They aren't pushing Gridstore as a solution when customers come to call, and this is hurting them.

      I can't comment much on price - I seem to recall vaguely that it was actually not bad - or functionality - the last time I saw a demo it seemed to do what was required in a reasonable enough fashion - but the fact that I can't summon that information immediately, when it is essentially my job to know this stuff, just reinforces how ineffective Gridstore has been at remaining "sticky" with mindshare.

      By all accounts Gridstore seems a good product, but the company that sells that product is about to get absolutely pwned by the fist of a dozen angry gods as they all turn their eyes from KVM to Hyper-V. Everyone has an ESXi hyperconverged solution. They're all finishing up with KVM/OpenStack. Hyper-V is next. After that: Xen.

      Gridstore doesn't seem ready to go to war. They don't seem to even understand what is about to happen to them, let alone be remotely ready for it. Too bad, really. They seemed like nice folks.

      1. swhitworth

        Re: You forgot a player

        Hi Trevor!

        I see your point on ALL of these. A few have improved significantly and others are in the works. I know you from working at a previous hyperconverged player - you were very helpful at Spiceworks last year, in fact! :)

        I am raising these notes with the exec team internally. I recently joined Gridstore and dug into all of the things you mentioned above before doing so. Just in the few months I have been here there has been significant and positive change. I would love to talk to them more about getting the right people (you, for example) engaged and being more open about the technology. Issues like sales and partner programs are rapidly changing - and for the better.

        As always I appreciate your honesty and transparency, I will do some due diligence on my end and start to address the issues as quickly as possible.

        1. Trevor_Pott Gold badge

          Re: You forgot a player

          I am always available to my hyperconverged brethren when needed. Give me a ping and I'll provide what help I can.

  3. Highlevelthinker

    Server/SAN was always going to take over anyway. I am surprised you give legacy SANs 10 years; I would say 5-7. Legacy SAN will also survive in niche environments, such as low-latency environments where hypervisors kill performance. Server/SAN is limited to 10GbE NICs in the same way legacy SAN is, and will evolve as SAN did in the past. Server/SAN will also halt the move to the cloud for some, as IT is now delivered within a business at less cost than any cloud provider can supply it as utility compute, with far superior performance and less risk in terms of who has your data. Vendors will adapt and move to selling Server/SAN as they sold legacy SAN in the past.

    1. Trevor_Pott Gold badge

      Oh, there's some debate here. Gartner (and some internal EMC projections) say that hyperconverged solutions will have 51% of the market by 2018. I disagree and think it's going to be 2020. The Wikibon people seem to be somewhere in the middle.

      What nobody seems to understand when they do these calculations is that - with the exception of NetApp - array vendors will adapt. EMC is already doing so. Tintri is doing so. Others are at least slowly trying to change.

      With the exception of Nutanix, hyperconverged vendors are still in startup mode. They don't have the R&D capacity to really go toe to toe with someone like Dell. Array vendors will start to add value by acquiring new startups (like copy data management experts) and raising the bar for enterprise storage functionality. This will force hyperconverged players into a feature war they may well not win.

      The end result will be a thinning of the herd on both sides. I ultimately think that hyperconverged vendors will win, but I am expecting a rally by array vendors around the end of 2016 that will buy them a couple of years before arrays are finally reduced to a niche.

      The war is already over, but arrays will fight to the last man to keep their margins. And they'll ultimately lose.

  4. Yaron Haviv

    SAN/vSAN Centric, ignores reality

    This report is too SAN/vSAN centric. While I agree the current SAN model will shrink, vSAN is not the main alternative.

    vSAN is used for co-located storage (VM images, app data, ..) and, with new technologies like dedup and containers (a Docker image is only 200MB vs a 10GB vDisk), the relative amount of server co-located storage and block storage will go down.
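
    To put rough numbers on that (the 200MB and 10GB figures are from the paragraph above; the instance count and per-container writable layer are my own assumptions):

        # Co-located image footprint for 500 app instances: full vDisks vs. one
        # shared Docker image plus a small writable layer per container.
        instances = 500
        vm_total_gb = instances * 10                  # each VM carries a 10 GB vDisk
        container_total_gb = 0.2 + instances * 0.05   # 200 MB image + ~50 MB per container
        print(f"VMs        : {vm_total_gb:,} GB")         # 5,000 GB
        print(f"Containers : {container_total_gb:,.0f} GB")  # ~25 GB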

    The real hyper storage growth is in shared unstructured data, i.e. IoT, video streams, logs, big data, etc. Such storage does not use any SAN/vSAN protocol and cannot be co-located with the app cluster (vSAN), simply because: a. any app in a different compute cluster/region, or even a mobile device, may want to access it; b. it grows at a rate of >100% per year, and adding servers/CPU/memory for the sake of adding petabytes is not economical, dense or power efficient enough.
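
    A minimal growth sketch makes the economics point concrete (the starting size and per-node capacity are invented for illustration; only the >100%/year growth rate is from the point above):

        import math

        # Data doubling every year, fixed usable capacity per hyperconverged node.
        data_pb, node_pb = 1.0, 0.5     # start at 1 PB, 0.5 PB usable per node
        for year in range(6):
            nodes = math.ceil(data_pb / node_pb)
            print(f"year {year}: {data_pb:5.1f} PB -> {nodes} nodes (each bought with CPU/RAM it may not need)")
            data_pb *= 2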

    It's enough to look at the hyper-scale titans, who don't grow their vSANs significantly as the post/report may imply, but rather grow exponentially and invest most of their energy in shared data lakes and data services supporting object, scale-out NAS and NoSQL models.

    If hyper-scale trends are an indication of where the enterprise will go, enterprise data lakes, next-gen object storage and scale-out NAS will probably store more data than vSANs (hosting small Docker images and private app data).

    Yaron

    SDSBlog.com

  5. Magellan

    For Server SAN to displace traditional SAN, it must develop traditional SAN attributes

    I see four key attributes Server SAN software must develop for it to become the primary storage archetype for the data center:

    1. Server SAN software must support multiple hypervisors, and no hypervisor (e.g., containers, Hadoop, Oracle RAC, etc.).

    2. Server SAN software must be or become flash aware (Write Amplification Factor = 1.0, etc.).

    3. Server SAN software must move to parity/erasure coding for data protection and away from RF2/FTT1 and RF3/FTT2 (see the capacity-overhead sketch after this list).

    4. Server SAN software must support storage only nodes and compute only nodes for asymmetric scaling.
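
    A minimal sketch of why point 3 matters, comparing usable capacity under replication and under a sample 4+2 erasure code (the 4+2 layout is just an example scheme; real implementations vary):

        # Usable fraction of raw capacity for replication vs. erasure coding.
        def usable(data_units, protection_units):
            return data_units / (data_units + protection_units)

        print(f"RF2 / FTT1 (2 copies, tolerates 1 failure) : {usable(1, 1):.0%}")  # 50%
        print(f"RF3 / FTT2 (3 copies, tolerates 2 failures): {usable(1, 2):.0%}")  # 33%
        print(f"EC 4+2     (tolerates 2 failures)          : {usable(4, 2):.0%}")  # 67%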

    1. SebastianT

      Re: For Server SAN to displace traditional SAN, it must develop traditional SAN attributes

      Nutanix actually does all of these things today. They recently added erasure coding, they support three hypervisors, they run Oracle RAC, and they have storage-only nodes. They've been flash aware since day one, and at their conference they announced container support. Pretty far out ahead of the others I've seen in this space so far. Time will tell if they can keep the lead.

  6. Dave Hilling

    I hate predictions

    Tell me how tape is dead and SANs will disappear. There will always be use cases for SAN, just like tape and any other technology that's been declared dead multiple times. Like someone mentioned above, if I need TBs/PBs of storage, why would I want to add more CPU/RAM etc. just to get it? Sure, in some scenarios Server SAN will pick up, but in many places it will probably be barely used at all.
