Enterprise storage will die just like tape did, say chaps with graphs

The Wikibon biz tech consultancy thinks that the era of networked filers and SANs is going to end with server SANs replacing them. SANs and filers will have largely died away by 2027, it reckons. The chart below shows the results from Wikibon's Server SAN Research Project 2014. Between 2012 and 2014 the storage market will …


This topic is closed for new posts.
  1. Duncan Macdonald Silver badge

    Partly stating the obvious - SANs are I/O bound

    A single high-end flash drive can use most of the capacity of a 10GbE link. Almost all SANs are horribly I/O bound. (A small array of 100 SSDs can have a raw I/O capability of over 50GBytes/sec (400Gbits/sec) but is unlikely to have more than two 40GbE links, giving only 10GBytes/sec - and the problem gets worse on larger arrays.)
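    A quick back-of-the-envelope sketch of that arithmetic (the ~500 MB/s sustained per SSD is an assumption, typical for drives of the era, not a measured figure):

```python
# Aggregate SSD bandwidth vs. what the array's network links can carry.
# The 500 MB/s per-SSD figure is an illustrative assumption.
def array_bandwidth_gap(num_ssds, mb_per_ssd=500, links=2, link_gbits=40):
    raw_gbytes = num_ssds * mb_per_ssd / 1000   # raw drive bandwidth, GB/s
    raw_gbits = raw_gbytes * 8                  # the same figure in Gbit/s
    net_gbytes = links * link_gbits / 8         # what the links can carry, GB/s
    return raw_gbytes, raw_gbits, net_gbytes

print(array_bandwidth_gap(100))  # (50.0, 400.0, 10.0): 50 GB/s of flash behind 10 GB/s of network
```

    Whatever per-drive figure you plug in, the drives out-run the links by a wide margin.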

    Directly attached storage kicks the sh*t out of SANs for speed and latency. The advantages of a SAN are reduced storage requirements (thanks to deduplication) and centralised backup. However, these advantages no longer outweigh the costs: lower server performance and the high price of the SAN hardware.

    1. P. Lee Silver badge

      Re: Partly stating the obvious - SANs are I/O bound


      Over-enthusiastic pricing by vendors is partly to blame. A SAN costs so much that you have to spread the cost across machines - and, ironically, it costs so much because you've tried to cater for so many VMs by loading it up with management features.

    2. Dave Hilling

      Re: Partly stating the obvious - SANs are I/O bound

      I get over 5GB/s max with an average of about 1ms latency on reads right now from an all-flash array. Writes are a bit slower, with an average of about 5ms... I can live with that, considering it's only going to get better over time.

    3. Anonymous Coward
      Anonymous Coward

      Re: Partly stating the obvious - SANs are I/O bound

      While you're not wrong, server SAN doesn't even remotely address the problem. SAN storage isn't I/O bound on the path to the server; the issue lies in the back-end SAS loop, which currently runs at 6Gbps if you're lucky. Connecting to the SAN can be done with IB or 40GbE if you can genuinely drive the bandwidth, but a system with dual 40GbE adapters will only have room for a few SAS ports for disk loops. The SAN has only so many PCI lanes for extra SAS ports, and this limits the I/O.

      You may think server SAN would help with this, but in reality those servers must also have networking components taking up PCI lanes, so they won't have much extra bandwidth. Consider also that present technology requires the in-server storage to be replicated out to another box for resilience - a massive downside on the P4000, and one which causes higher latency, not lower, compared with a properly configured solution.

      I hate to say it, but Violin are the only ones I've seen do anything useful to address this. All of their internal interconnect tech is patented, though, and nobody likes dealing with the mess that is Violin at the moment, so this isn't really a solution.

      1. bitpushr

        Re: Partly stating the obvious - SANs are I/O bound

        Disclaimer: I am a NetApp employee.

        I can't speak for other vendors, but NetApp uses stacks of SAS - not loops. While the speed of SAS is 6Gbit/sec., it is important to remember that this is per lane. Our copper SAS cables feature four independent lanes which are automatically load-balanced, meaning we get 6*4=24Gbit/sec. bandwidth per SAS cable.

        1. Anonymous Coward
          Anonymous Coward

          Re: Partly stating the obvious - SANs are I/O bound

          So you're saying that on the standard 4-port SAS cards you're able to drive 96Gbps? And you can do this from all of the PCI slots in the chassis? I'm curious which Xeon chips NetApp have been using that have such huge expansion options. Some quick googling suggests that the FAS3240 has a Harpertown with 40 lanes of PCIe gen2, for a total of 160Gbps in the whole controller. Of course, a fair chunk of this will be used by the external connectivity at 10Gbps or 16Gbps too. Obviously the new FAS8xxx will use PCIe gen 3 and, with most of the ports on board, is a little less wasteful, but I still see problems with putting lots of SSD in there.

          There's nothing wrong with the architecture for disk, but SSD can drive all the bandwidth you have available, so these things are starting to matter.
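          The lane-budget argument sketches out like this (assuming PCIe gen2 at 5 GT/s with 8b/10b encoding, i.e. roughly 4 Gbit/s usable per lane, and 4-lane SAS cables at 6 Gbit/s per lane - illustrative figures, not vendor specifications):

```python
# PCIe gen2 lane budget for a storage controller: how much bandwidth is
# left for back-end SAS once the front-end (host-facing) ports are paid for.
# All constants are illustrative assumptions, not vendor specifications.
GEN2_GBIT_PER_LANE = 4   # 5 GT/s with 8b/10b encoding ~= 4 Gbit/s usable
SAS_GBIT_PER_LANE = 6

def controller_budget(pcie_lanes, frontend_gbit):
    """Total PCIe bandwidth and what remains for back-end SAS ports."""
    total = pcie_lanes * GEN2_GBIT_PER_LANE
    return total, total - frontend_gbit

def sas_cable_gbit(lanes=4):
    """Aggregate bandwidth of a multi-lane SAS cable."""
    return lanes * SAS_GBIT_PER_LANE

total, backend = controller_budget(pcie_lanes=40, frontend_gbit=2 * 10)
print(total, backend, sas_cable_gbit())  # 160 140 24
```

          So even before protocol overhead, a 40-lane gen2 controller tops out around 160 Gbit/s shared between front-end and back-end - which is the ceiling being argued about above.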

  2. Julian Bradfield

    just like tape did?

    We get articles here every few months pointing out that tape is still going strong, and showing no signs of dying out!

    1. Ole Juul

      Re: just like tape did?

      Reality is so inconvenient when you're trying to make a point.

    2. Flawless101

      Re: just like tape did?

      Hey! You can prove anything with facts!

  3. Anonymous Coward
    Anonymous Coward

    "The Mainframe is dead" <- Circa 1990

    "Tape is dead" <- Circa 2000

    "Disk is dead" <- Circa 2010

    I do wish spreadsheet monkeys would stop pronouncing upon things they have absolutely no clue about.

    1. h4rm0ny

      Just because you can still find some instances of something in use, doesn't mean that the market has survived as anything like a large business. Also, I don't think anyone sensible has said that disk is dead. Many have pointed out that we can see its end as a mainstream technology from where we are, though.

      1. Anonymous Coward
        Anonymous Coward

        Just because you can still find some instances of something in use, doesn't mean that the market has survived as anything like a large business.

        You're making the same mistake as my boss. He tells us all that tape is dying/dead, and can't seem to understand that more data is being put onto tape now than ever before. He uses simple measures like the tape manufacturers' sales figures, which show they sold fewer tapes last year than the year before. He never accounts for technology advances, despite the fact that we upgrade and supply tape drives to our customers. He can't grasp that what once took 10 tapes now doesn't even fill one; the figures say fewer tapes were sold, so tape must be dying... It's the same argument with tape speed: those who tell us how slow it is never account for the advances in current tape drives, which actually make them faster than some disk drives.

        I live in hopes that someone in the industry will someday come out with the pronouncement that "Predicting that some particular technology is dying/dead, is dead".

        1. Anonymous Coward
          Anonymous Coward

          Just because you personally still use tape doesn't mean it's not dying on its arse. There are a few scenarios where tape is still considered useful:

          1. Where there isn't a second site to replicate disk backups to

          2. Where archival capacity requirements make disk prohibitively expensive

          3. Very very long term cold archival

          4. Where the admin has decided against updating his skills

          Services such as Azure and Amazon are beginning to make a second site unnecessary, and the cost of the meat bag handling the tapes is making archival move towards disk. Facebook are looking into SSD as a long term archival solution since the overall cost is lower when power, cooling and maintenance are taken into account. Very long term archival requires recycling tapes every 5 years as well as tape migration projects to new formats, and as a result disk is now becoming a cheaper option if only for simplicity. As for number 4 on the list, those people will gradually be exiting the profession over the next few years as businesses take back control of their IT from the IT department.

          There are certainly lots of businesses still relying on tape for backup, and it's still a great solution but generally speaking those who are implementing new backup systems are now looking at disk first.

          1. JLH

            "Very long term archival requires recycling tapes every 5 years as well as tape migration projects to new formats, "

            Magnetic disks don't last that long either.

            I have one array which is coming up for five years old - disks are regularly failing on it. Still getting disks under a support contract, which is fine.

            I don't think you should expect ANY medium to last for many years (except maybe acid free paper).

            Remember though that LTO tapes guarantee being readable from two (or more?) generations back of tape drive, i.e. an LTO5 drive will read LTO3 tapes.

            I've put my foot in it here - cue war stories of how old tapes are NOT readable.

            1. Lusty

              The difference is that disk doesn't need a meat bag to load and change tapes slowly over a period of months; the data on disks is all online, which is where the cost savings come from. As long as power requirements don't outweigh that saving, disk is starting to be the winner - until you get to the extremely large capacities of systems like the LHC.

              Recycling in this instance means rewriting to the same tape every 5 years, rather than replacing the tape. If the data is valuable enough to archive, this is a necessary step. Disk can do this online as part of normal maintenance.

          2. Anonymous Coward
            Anonymous Coward

            >Facebook are looking into SSD as a long term archival solution since the overall cost is lower when power, cooling and maintenance are taken into account.

            Really? SSDs cost less than a tape? Do you actually know how much power and cooling a tape requires when it is not being accessed? See if you can work it out.

            1. Anonymous Coward
              Anonymous Coward

              Do you have any idea how much the maintenance part costs with tape? Tape drives need cleaning, tapes need rewriting regularly, people need to move tapes, and old tapes need to be transposed onto new tapes when formats change. Facebook have done the maths, and they know a thing or two about operating large systems on a budget.

          3. Anonymous Coward
            Anonymous Coward

            There are certainly lots of businesses still relying on tape for backup, and it's still a great solution but generally speaking those who are implementing new backup systems are now looking at disk first.

            I'll make sure to mention that to the next customer who comes to us looking to implement VTL technology.

            "You won't want to be buying that Mr Customer, some bloke on the internet says you don't need the ability to restore from tape anymore. He says you'll be far better off backing up your 200TB of data over your network link to Amazon, or buying a couple of nice expensive site to site links (yes I know, I'm ignoring the licensing costs for replication solutions with a capacity of 200TB) so you can replicate it all over to that other site you sold last year".

            This is what I love about the internet, there's always someone who knows how to do the things you do better... at vastly increased cost.

            1. Anonymous Coward
              Anonymous Coward

              You know what a VTL is? VIRTUAL tape library. Usually a bunch of disks pretending to be tapes.

              1. Anonymous Coward
                Anonymous Coward

                You know what a VTL is? VIRTUAL tape library. Usually a bunch of disks pretending to be tapes.

                Yeah thanks, all the best ones destage to tape.

                1. Anonymous Coward
                  Anonymous Coward

                  They allow you to destage to tape, yes. It's neither mandatory nor recommended, it's just an option. Replication to another VTL is generally a better option if you have two sites as it's considerably cheaper when you do the maths.

                  1. Anonymous Coward
                    Anonymous Coward

                    No, it's definitely recommended.

                    Tell me, how do the maths of powering disk systems work out for storing data for compliance reasons - you know, those awkward documents you have to keep copies of for 7 years but will never have to access? The ones that, if ever required, will no doubt have to be analysed on an auditor's systems (no doubt facilitated by sending them tapes or CD/DVDs), not on yours.

                    How about the maths of storing system backups for massively redundant systems - the get-out-of-jail-free cards which you are highly unlikely ever to have to play, but which you have to keep because regulatory compliance requires you to be able to restore your systems? How much do you think customers should keep paying to keep such datasets readily available?

                    I, for example, see business-critical systems which have been running for 10+ years. How much does it cost to keep the weekly full backups and the nightly differential backups for a massively redundant system instantly available for 10 years?

                    How about all those install images - the ones you use to install your systems, which are then kept lying around on spinning rust never to be used again? What's the cost of keeping a disk system powered for, say, two years so that a copy of code you'll never use again is available for instant use?

                    Replicated disk systems are great, but lots of data doesn't need to be on it.

  4. RonWheeler


    They cured that yet?

  5. Ian Ringrose

    PCI is the new network…..

    SANs provide lots of value in backup, snapshots etc.; they just can't move data fast enough between themselves and the servers.

    So put the SAN in the same box as the server and use a PCI bus to connect them together.

    Then extend the PCI bus outside of the box, so they just have to be in the same rack….

    1. Nick Dyer

      Re: PCI is the new network…..

      What you just described was a failed startup called Virtensys, purchased by Micron a few years ago...

      The major problem here is that if a PCI-E bus fails in a server, the whole server bluescreens. So it was always a massive SPOF.

    2. Lusty

      Re: PCI is the new network…..

      Violin do this. You may not be able to afford one, but it's certainly widely available.

  6. Jim 59


    I checked out Wikibon and their website. I love their excitement, enthusiasm and openness. In the interests of balance however, it is worth noting that they have only 5 employees and appear to be really just a small internet tech forum, founded in 2007. This does not make them a world authority on tech trends.

    "Server SAN" does not appear to be a thing outside of Wikibon (according to Google), and their definition of it seems to be a renaming of the "nearline storage" that people got very excited about in 2008.

    Why would a small company puff such a faint idea? Maybe I lack the vision thing.

  7. Anonymous Coward
    Anonymous Coward

    IO is still remote

    "the protocol overheads of IO and fibre networks become a bottleneck for new applications"

    All of these "Server SANs" still use Ethernet/IB/FC/whatever to access the data.

    Take VSAN for example: at least 50% of the IO is going to be remote. Writes are not guaranteed to be local; even with FTT=0 the VMDK can be on a different node, making 100% of IO remote. Reads are sourced from all of the nodes holding the data, so with FTT=1, even if one copy is local, 50% of the read IO is coming from a remote node.

    How is this different than a "traditional" storage array being accessed remotely?
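    The replica arithmetic above can be put as a toy model (assuming, as the comment describes, that reads are balanced evenly across all copies of the data):

```python
# Fraction of read IO that crosses the network in a replica-based server SAN
# when reads are striped evenly across all copies of the data.
def remote_read_fraction(copies_total, copies_local):
    return (copies_total - copies_local) / copies_total

print(remote_read_fraction(2, 1))  # 0.5 -> FTT=1 with one local copy: half the reads are remote
print(remote_read_fraction(2, 0))  # 1.0 -> neither copy local: every read is remote
```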

    "Server SAN has a large number of potential benefits to application design, application operation, application performance and infrastructure cost. These come from an increased flexibility in how a storage is mapped to the applications."

    Can I get an example here? StoreVirtual uses iSCSI (so does everyone else), VSAN is VMDK only, Gluster does POSIX, and so on. There's nothing new here that adds "increased flexibility in how a storage is mapped to the applications".

    If you want to argue that the management software, whether a custom app or "software defined" because it uses an API for automation, enables flexibility, I at least see the point to be made.

    Maybe (probably?) I'm dumber than the average person - can someone spoon-feed it to me?

  8. Anonymous Coward
    Anonymous Coward


    We're currently looking at a VDI setup (~250 office users, full Win7 VMs). We have a partner offering us a 3Par-based SAN for (reportedly) the best IOPS, plus a mixed VMware/Citrix VDI setup.

    Over the last few days I've been reading about VMware's VSAN for the first time, and it sounds great to me (no more end-of-life policies for SANs - just add another server from any other HCL'ed vendor; too good to be true?).

    Of course, we'd have licensing costs per CPU, but as you know, 3Par isn't cheap at all.

    Should we go VSAN instead and simply take something like a dozen SSD/SAS-equipped DL380 G8s?

    What do you think?

    1. Anonymous Coward
      Anonymous Coward

      What are the desktop specs?

      First, this probably isn't the best place for questions like this.

      Second, I would make the sales teams work for their money. Approach VMware, 3Par, EMC, NetApp, Pure, whoever else you want, and tell them what you want to do. Make a list of features that you must have, things like "X GB desktop" and "Y/IOPS per desktop". If you don't know IOPS, at least give them the applications in use.

      That list of must-have features can also include things like ease of manageability (be careful, this is subjective), expandability (e.g. how hard is it to add capacity?), serviceability/reliability (what happens when component X fails?), and recoverability (including HA and DR). Also, if you aren't tied to VMware View, check out Citrix's offerings as well.

      You'll get back vastly different answers. I like to pick a baseline of things that I'm comfortable and familiar with to have them compare against that. The vendors will include the features that make their products unique, it's up to you whether they are worthwhile.

      And don't forget TCO. Initial CAPEX outlay is just a small part of the total cost of ownership.
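      As a rough illustration of the "Y IOPS per desktop" exercise above (the per-desktop figures here are purely illustrative assumptions, not sizing guidance):

```python
# Toy VDI sizing: turn per-desktop estimates into the aggregate numbers
# you would put in front of the vendors. All figures are illustrative.
def vdi_sizing(desktops, iops_each=20, gb_each=40):
    return desktops * iops_each, desktops * gb_each

iops, capacity_gb = vdi_sizing(250)
print(iops, capacity_gb)  # 5000 IOPS and 10000 GB for a 250-seat estate
```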

    2. Lusty

      I certainly wouldn't go with 3Par for a 250 user implementation. Something like Purestorage, Equallogic SSD or Atlantis would be more in line with that size solution, and even those are stretching it a bit.

  9. Paul 77

    Strange how comments on an article not directly about tape degenerate into a debate about it, just because it's mentioned in the title :-)

    Anyway, still using LTO5 here. Why? Because the tapes are smaller and cheaper than equivalent disk drives. That's important when you have to shift several of them several thousand miles.

  10. JohnMartin

    The vast majority of SANs are not limited by network performance

    - Disclosure: NetApp employee; opinions are my own -

    1. The network hasn't been a bottleneck to storage for a very long time, unless a moron designed your SAN or you were really cheap with your SAN switch gear.

    2. Most SANs I've checked use about 5% of the available bandwidth. Even for what are considered "enterprise class" disk arrays - 100,000 IOPS at 1ms response time and 8K blocks - you're looking at approximately 781 MiB/sec. A single 10GbE connection can handle that with ease. All-flash arrays with "1 meeeelion" IOPS might require around 8 of them.

    3. The additional latency incurred by a modern network is measured in microseconds. That does make a difference in high-frequency trading, in some fraud-detection systems and in raw sequential throughput for HPC applications, but these are currently edge cases in the "second platform", which is likely to account for the vast majority of IT spend for the next decade or so.

    4. Extending PCI as a network... that is what InfiniBand was designed to do; in some respects, PCIe is just a cheap and nasty local-only version of InfiniBand. Personally I've been a fan of RDMA technologies for about 10 years, and RoCE also looks pretty good.

    5. Any of these "Server SANs" require a fast, well-designed east-west "Server Area Network" - which is what a "SAN" was before it became a "Storage Area Network". I.e. you're still accessing data over the network for any non-local data.

    Having said that, there is some merit to a local, ultra-low-latency, well-designed east-west server network, but overcoming "IO-bound SANs" is not one of them. If you want to use a server area network to aggregate the CPU and storage-class memory resources of multiple disparate computers into something that looks like one large physical resource, then it's all good; but for the most part servers, storage and networking have independent scaling requirements, and keeping them separate still makes economic sense, provided you have effective ways of dealing with the additional complexity.
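    The IOPS-to-bandwidth conversion above works out as follows (payload only, ignoring protocol overhead, and treating a 10GbE link as carrying roughly 10 Gibit/s for simplicity):

```python
import math

# Convert an IOPS figure at a given block size into MiB/s, and into the
# number of ~10 Gibit/s links needed to carry the raw payload.
def iops_to_links(iops, block_kib=8, link_gbit=10):
    mib_per_sec = iops * block_kib / 1024
    gibit_per_sec = mib_per_sec * 8 / 1024   # MiB/s -> Gibit/s
    return mib_per_sec, math.ceil(gibit_per_sec / link_gbit)

print(iops_to_links(100_000))    # (781.25, 1): fits on a single 10GbE link
print(iops_to_links(1_000_000))  # (7812.5, 7): 7 links of raw payload; call it 8 with overhead
```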


    John Martin

