Don't let the SAN go down on me: Is the storage array on its way OUT?

With EMC saying it plans to put VMAX into the "capacity tier"* and suggesting that performance cannot be met by the traditional SAN, are we finally beginning to look at the death of the storage array? The storage array as a shared monolithic device came about almost directly as the result of distributed computing; the …

COMMENTS

  1. Tony Rogerson

    Yes - the SAN has had its place, and certainly within the database area it has not always been successful, because of the black-box approach that frustrates the hell out of the folk trying to manage the performance of said database - the DBAs. You don't see a SAN in the cloud thankfully, just commodity kit and DAS - at last!

    Software has evolved to provide reliable, performance-rich distributed data processing - Hadoop, Cassandra and VoltDB to mention just three.

    Hardware has evolved - it's great to see even PCIe-based SSDs being threatened by the true end goal of persistent memory - we are now able to put TBs of flash directly into memory sockets.

    It's going to be an interesting few years; thankfully and hopefully they will be without SANs!

    1. Anonymous Coward
      Anonymous Coward

      Speaking as a storage guy who's worked with extremely large databases and SANs: the idea isn't that it's black-boxed; the key is that the OS guy, the DBA and the storage guy have to work together to design a system. I have never met a DBA who knows enough about storage arrays to be able to configure one for performance, in the same way that I've never met a storage guy who knows enough about databases to do the same. That said, I've met enough of both who think they do... OS guys sometimes seem to hover in a middle area where they know the OS and software stack, or the OS and hardware stack.

      Even if your storage is moved locally, you're still going to have the same problems, just attached to a different bus. Sure, it may be fast enough without bespoke configuration to start with, but eventually you'll come across the same problems as the monolithic array on the end of a SAN, and some new ones to boot.

      Oh, and you really do see SANs in the cloud - maybe not at Google or Facebook, but at most others you do.

      1. Anonymous Coward
        Anonymous Coward

        @AC 9:04

        I would go so far as to say the OS guy does not know enough about either to be useful, other than keeping the drivers up to date.

        Database design down to query and table level needs to be understood well enough to provide specs a storage girl can design against.

        Once understood, the storage can be configured for the data rate at the logical disk.

        1. dan1980

          Re: @AC 9:04

          @AC 7:57

          "Database design down to query and table level needs to be understood well enough to provide specs a storage girl can design against."

          That's the heart of the matter - if, as a storage person, you don't have good numbers from the applications people then how exactly are you supposed to do anything other than provide a generic, white-paper solution?

          Sometimes, of course, that's the only way and extra performance therefore comes from throwing money at the problem, adding more SSDs and cache. But that shouldn't be seen as a failing specifically of SANs - just of the disconnect between application workloads and hardware that can occur.

          Of course it also depends on how you are provisioning your storage - specifically, whether you are deploying storage solutions dedicated for a specific workload or if you have deployed a more 'shared' system with a single storage platform used for many applications.

          Most complete installations use both as even if the majority of your hardware is devoted to storing and processing data, that's useless without other elements such as the presentation level. Most businesses also need to cater to day-to-day operational needs, including provisioning e-mail, intranets, terminal servers/virtual desktops, etc...

    2. pPPPP

      >You don't see a SAN in the cloud thankfully, just commodity kit and DAS - at last!

      Are you sure? You don't think that underneath that cloud there may possibly be storage arrays protecting your data?

      Putting SSDs (or more likely flash memory - why would you want solid-state disks?) into a server is all well and good, but what happens if the server fails? Well, you could introduce clustering, but that means your storage will have to go outside the server, otherwise it will fail when the server fails. I know, why not have a shared storage appliance that all of the servers in the cluster can access?

      Shared storage still has its place, and data protection and disaster recovery dictate that it's not going to go away. How you share the storage is another matter. Dedicated arrays of disks may have a limited life, but they're still going to be around for a while in one way or another. It's not just going to suddenly disappear, just like the so-called "cloud" doesn't mean that physical hardware ceases to exist.

  2. Anonymous Coward
    Anonymous Coward

    The storage startups...

    I'm always a bit suspicious of the sort of storage start-ups who make the most noise these days. You know the type: "we can fit a zettabyte of flash into 4U, do everything that EMC/IBM/HDS can do with their arrays, but at twice the IOPS". These companies often seem to be on their third product, with no customers to speak of, and a trail of venture capital that's been spaffed on swanky offices as much as on developing the product. Why indeed would you bet the company on these companies?

    Their sole reason for existing in this phase seems to be to make some cursory development of a product and then have one of the big boys step in to finish it off and market it for them. Now, there isn't anything bad, per se, about this being your end goal, but it does mean that no-one would want to buy the product, until it's been bought by one of the big boys.

  3. Anonymous Coward
    Anonymous Coward

    I can tell you that we are currently phasing out our Dell EqualLogic arrays. We used to use them to handle all our database volumes; however, anything above a total of 2.5K IOPS seems to make it unbearably slow.

    We bought large 15K SAS disks for all our database servers and currently only run replication servers on the SAN, so that we can quickly restore a volume should it be necessary.

    I honestly find it unbelievable that a single simple consumer SSD can easily handle 40K IOPS, while our SAN array (in excess of 100K quid) has trouble handling 4K IOPS.

    1. Alan Brown Silver badge

      "I honestly find it unbelievable that a single simple consumer SSD can easily handle 40K IOPS, while our SAN array (in excess of 100K quid) has trouble handling 4K IOPS."

      Put simply: Controllers.

      Most SAN array controllers simply aren't very fast.

      BTW: I don't get the comment about FC - in my experience "it just works" and works fast. We had some QLogic/Brocade interoperability issues about a decade ago, but that was down to crufty Brocade GUIs not telling the truth about what was happening at CLI level, and in any case it didn't stop anything working.

    2. Lusty

      "anything above a total of 2.5K IOPS seems to make it unbearably slow."

      If you're being serious with that statement then you or one of your colleagues has misconfigured something quite badly. The EqualLogic is capable of many more IOPS than that, and I have measured this on many customer sites (I work for an independent consultancy, not for Dell...).

      Things to look at:

      Did you choose RAID 50 instead of RAID 10 as recommended?

      Did you configure your MPIO using the Dell best practice rather than the VMware/Microsoft ones?

      Did you install the correct HIT kit?

      Did you update the software to the latest which drastically improves performance in multi array solutions?

      Did you properly design the solution, or bung different models and disks in and hope for the best?

      Have you enabled Jumbo frames throughout?

      Have you enabled flow control throughout?

      Have you configured a non-blocking network dedicated to storage using switches with sufficient cache and backplane bandwidth?

      Did you configure the two sides of this network with a large enough ISL or stack cable?

      The list goes on, but if you've not done all of this then chances are your new solution will eventually disappoint you as well! (Rough sums on the RAID point below.)
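
      To put rough numbers on the RAID question in that list, here is a minimal back-of-envelope sketch - the per-disk IOPS, disk count and read/write mix are illustrative assumptions, not measurements from anyone's array:

```python
# Rough, illustrative IOPS estimate for a small spinning-disk array.
# Per-disk IOPS, disk count and the 70/30 read/write mix are assumed
# figures, not measurements from any particular EqualLogic member.

def array_iops(disks, per_disk_iops, write_penalty, read_fraction):
    """Host-visible IOPS once the RAID write penalty is paid."""
    raw = disks * per_disk_iops
    write_fraction = 1.0 - read_fraction
    # Each host write costs `write_penalty` backend I/Os; each read costs one.
    return raw / (read_fraction + write_fraction * write_penalty)

DISKS = 16          # e.g. one array member
PER_DISK = 180      # ballpark for a 15K SAS spindle
READS = 0.7         # assumed 70% read workload

print("RAID 10 estimate:", round(array_iops(DISKS, PER_DISK, 2, READS)))
print("RAID 50 estimate:", round(array_iops(DISKS, PER_DISK, 4, READS)))
```

      A small shelf of 15K spindles tops out in the low thousands of IOPS whatever the controller does, which is also why a single consumer SSD quoting 40K IOPS looks so startling next to it.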

      1. pPPPP

        The first question I would ask is: how many drives are in the array, and how many databases are trying to use those drives simultaneously? Storage arrays have cache which reduces the amount of I/O to the backend disks, but you're still going to get contention. Two disks shared between two servers/applications will not run faster than putting a disk into each server. Quite the opposite.

        1. Lusty

          "Storage arrays have cache which reduces the amount of I/O to the backend disks"

          Not usually. Cache delivers the same I/O to the disk but smooths out the delivery of that I/O to allow for the burstiness of data arriving at the SAN. NetApp do reduce the disk I/O in a way, by lumping writes together and making them sequential, but most vendors just use the cache to smooth out the load and as somewhere to put the data while the RAID and parity gubbins are worked out.

          1. pPPPP

            That's not true. There are many applications which continuously operate on the same data again and again. While writes will be periodically destaged, reads are served repeatedly from cache. Where these reads are small-block random I/Os that results in significantly fewer reads from the backend disks.

            It all depends on how much cache you actually have. More cache, with an intelligently chosen block size, will significantly reduce the dependency on the backend. This is the main reason why those big enterprise systems outperform the midrange systems. They still have the same disk drives and have to adhere to the same rules. Serving I/O from cache improves performance.

            RAID write penalties are barely significant in enterprise arrays nowadays, with the obvious exception of rebuild times, hence the increasing prevalence of distributed RAID.
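
            As a back-of-envelope illustration of that point - the workload size, read/write mix and hit ratios below are made-up numbers, purely to show the shape of the effect:

```python
# Toy sums: how a read cache changes the I/O that the backend disks see.
# The 5,000 IOPS workload, 70/30 mix and hit ratios are illustrative only.

def backend_iops(front_iops, read_fraction, read_hit_ratio):
    reads = front_iops * read_fraction
    writes = front_iops * (1 - read_fraction)
    # Writes are destaged eventually, so they still reach the disks;
    # only the reads that hit cache are absorbed.
    return writes + reads * (1 - read_hit_ratio)

for hit in (0.0, 0.5, 0.9):
    print(f"read hit ratio {hit:.0%}: backend sees "
          f"{backend_iops(5000, 0.7, hit):.0f} IOPS")
```

            A cache big enough to keep the hot working set resident really does take reads off the spindles; a small cache that only smooths bursts barely changes the totals, which is where the two views above meet.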

            1. Jason Ozolins
              Meh

              Cache data where it is most effective

              Yes, but caching can be done on SAN clients as well as on arrays. Any application that runs against a single-mount filesystem can cache data locally to reduce re-reads. The amount of local cache scales up easily with the number of SAN clients, and filling out DIMM slots with the best bang/buck size of module is a cheap way to buy cache.

              And yes, if the data is on a cluster filesystem then the benefits of client caching depend a lot more on the type of app and the particular filesystem. Oracle RAC, for instance, manages cache coherence across multiple clients on shared database files at the application level and bypasses OS caching altogether; AFAIK it can also pass cached data from one RAC node to another across a fast interconnect like InfiniBand, rather than making the second node read from the shared database storage - in fact, over InfiniBand the requesting client may not even have to make a system call to receive the data.

              Cache on storage arrays is much more expensive per byte than local client RAM. On midrange arrays, with set amounts per controller, it is not that large compared to the total cache available on a few well-sized SAN clients, and for high-end gear like Hitachi virtualising controllers, cache upgrades cost so much that a couple of years back the storage admins at my university ended up in a sorry bind: they knew they needed more cache, but simply couldn't raise the money for the upgrade. This scarce and expensive resource is best used to do things that *can't* easily be done with cache on local clients, like:

              - reliable (mirrored, nonvolatile) write-behind caching, for write aggregation, annulment (quickly rewritten filesystem journal blocks, etc), and load smoothing (assuming there's any idle time!)

              - speculative readahead of sequential data during idle time; the array is the only thing that can really know if the disks are actually idle

              - reducing data read in common across *multiple clients*; for instance, base OS disk images in a copy-on-write VMFS disk hosting setup, or copy-on-write cloned SAN volumes. Or, as on the clusters at my workplace, lots of cluster nodes all reading the same executable and source data when a large parallel job starts. But in that case, the filesystem is all on JBODs, and Lustre object servers are doing the read caching, with terabytes of aggregate cache across all the object servers, at low cost per GB of cache.

              As for high-end arrays beating midrange arrays with the same quantity of disk due to lots of cache: apart from cache sizes, the number and speed of host-side and drive-side interfaces on the SAN controllers will certainly make a difference for non-random I/O benchmarks, given a large enough number of disks in the array, and then the controller architecture needs to be capable of feeding those interfaces. There are a lot of ways to get more performance from the same (sufficiently large) number of drives.
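
              To make the client-side caching concrete, here is a minimal sketch of a read-through block cache on a SAN client - the read_block callable, block size and working set are hypothetical stand-ins for whatever actually reads the LUN:

```python
# Toy read-through block cache on a SAN client. The backing read and the
# sizes are stand-ins; the point is that repeat reads never leave the
# client, leaving array cache free for work only the array can do.
import random
from collections import OrderedDict

BLOCK_SIZE = 4096

class ClientBlockCache:
    def __init__(self, read_block, capacity_blocks):
        self._read_block = read_block        # e.g. a pread() on the LUN
        self._capacity = capacity_blocks
        self._lru = OrderedDict()            # block number -> data
        self.hits = self.misses = 0

    def read(self, block_no):
        if block_no in self._lru:
            self.hits += 1
            self._lru.move_to_end(block_no)  # mark as most recently used
            return self._lru[block_no]
        self.misses += 1
        data = self._read_block(block_no)    # only misses touch the array
        self._lru[block_no] = data
        if len(self._lru) > self._capacity:
            self._lru.popitem(last=False)    # evict least recently used
        return data

# 1,000 reads over a 100-block working set: almost all re-reads stay local.
cache = ClientBlockCache(lambda n: bytes(BLOCK_SIZE), capacity_blocks=128)
random.seed(1)
for _ in range(1000):
    cache.read(random.randrange(100))
print(cache.hits, "reads served locally,", cache.misses, "sent to the array")
```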

  4. hmas

    Is the SAN on its way out?

    Not by a long way. There is definitely an issue, compounded by the prevalence of lots of different silos of storage infrastructure within a lot of sizeable organizations. They tend to procure storage on a per-application or per-business-unit basis, so you get a sprawl of low- to mid-range storage.

    Moving storage from shared arrays onto local servers and pooling it won't solve the issue. It'll require extensive planning, and you still haven't addressed the silo mentality that led to capacity being hoarded on a per-department basis.

    Maybe what you need is a better SAN and better processes to manage capacity, demand and chargeback.

    1. a_milan

      Re: Is the SAN on its way out?

      Spot on - properly sized and bought midrange storage nowadays along with a SAN and some VMware can do lots of stuff very very cost-effectively.

      Achieving the same kind of performance, ease of management, predictability and utilization with any other technology is simply not possible, IMO.

      Ten-gig Ethernet is still way off on ease of configuration and management; you need a separate LAN for data traffic, separate settings (jumbo frames, anyone?), etc. etc. Adapter prices are comparable these days and Cisco switches easily beat 8G FC on price per port (which is ridiculous); add to that the essential non-understanding of data traffic issues from LAN admins and you have a firm business case for SAN.

      How many people are willing to put it down on paper is sadly a different matter altogether.

  5. thondwe

    DIY SAN

    Given Microsoft's direction with Windows Scale-Out File Servers and shared SAS JBODs - and others, no doubt, in the open-source space - I say there's a move to building a DIY "SAN" out of servers + software + commodity hardware, and the death knell is sounding for dedicated SAN controllers - which in our HP SANs (3PAR + LeftHand) are just servers anyway. Yes, there are some clever ASICs in the servers, but with SSDs and Untrim etc. these are becoming less critical to adding performance/managing storage.

    It's much like the load balancer/firewall/router bits - usually a Linux OS + some applications. Either it's now a virtual appliance or a physical appliance - a.k.a. an Intel server board and some dedicated cards.

    Data centre of the future - racks of different vendors' servers, JBOD enclosures and switches, but it looks uniform because it's all running the same software stack.

    1. Tony Rogerson

      Re: DIY SAN

      Absolutely spot on; if you look at what the main cloud providers - Microsoft and Amazon - do in terms of their storage, that is exactly the approach they have taken.

      Microsoft Azure is not SAN based, nor is AWS.

      Hadoop is successful not just because it handles unstructured data via MapReduce, but because you can chop the data up, distribute it and have the processing run where the data is (locally) - the exact opposite of the SAN approach, where you move the data off the SAN to the server.
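
      To caricature the difference, here's a toy sketch - it has nothing to do with how Hadoop is actually implemented, it just counts the bytes that have to move under the two approaches, with made-up node and result sizes:

```python
# Toy comparison of "move the data to the compute" (classic SAN approach)
# versus "move the compute to the data" (Hadoop-style locality).
# Node count, chunk size and result size are illustrative.

CHUNK_GIB = 64
NODES = 10
RESULT_MIB_PER_NODE = 1

# SAN-style: every byte crosses the wire to the server doing the work.
bytes_moved_san = NODES * CHUNK_GIB * 2**30

# Locality-style: each node scans its own local chunk; only small per-node
# results travel to wherever the final combine happens.
bytes_moved_local = NODES * RESULT_MIB_PER_NODE * 2**20

print(f"data shipped, SAN-style:      {bytes_moved_san / 2**30:.0f} GiB")
print(f"data shipped, locality-style: {bytes_moved_local / 2**20:.0f} MiB")
```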

      It's going to take another few years, but the need SANs met can be more easily and cost-effectively met with software and commodity kit.

      I guess once we are all on the cloud the SAN v DAS argument will be moot anyway - because the major cloud vendors ain't SAN based! :)

      T

      1. pPPPP

        Re: DIY SAN

        Hadoop is fine for big data but it's not appropriate everywhere.

        If you have a critical business application which requires sub-millisecond response time, plus synchronous replicas of all data in several physical locations, with the ability to recover from site failure in seconds/minutes, would you really consider putting it on an Amazon cloud? Oh yes, did I mention that said application has sensitive government data which must be protected from prying eyes at all times? You'd put that on the cloud???

    2. Roland6 Silver badge

      Re: DIY SAN

      Full circle? The SAN came out of DIY storage arrays because, as anyone who has been there will tell you, building a stable and performant array is difficult...

      If memory serves me correctly, the first generation EMC controllers were rebadged DG Unix boxes...

      Whilst I expect that we will see open-source software that will enable the creation of 'DIY' storage arrays, many businesses will still take the off-the-shelf solution, as it gives a better time-to-market and will require less effort to maintain.

      1. Danny 14

        Re: DIY SAN

        Refurbished SANs are really coming down in price too. Smaller systems can get decent Dell offerings for a few k now. With Server 2012 you can cluster an MD32xx over SAS HBAs too, simplifying four-host setups - again good for smaller (and some medium) setups. SAS HBAs and an MD32xx can give very decent IOPS for minimal outlay, but you are constrained on hosts (unless you farm it out as SMB3 on 2012). Not ideal, but cheap.

  6. Anonymous Coward
    Anonymous Coward

    Clear visibility of the performance characteristics of the FC SAN could change many a mind about the validity of the classic FC fabric... but most don't have the tools to do it. Potentially, instead of throwing the baby out with the bathwater, investment in performance tools like OCI, Virtual Instruments and/or SolarWinds would be a start.

  7. M. B.

    SAN is not on its way out...

    It's just being joined by alternatives to the SAN. There are many ways to accomplish something; some are better than others. SANs are really just software running on servers. My understanding is that Dell Compellent is literally just the storage OS running on a pair of PowerEdge R720 servers with a write cache and some I/O cards in them. There is no magic in the box; all the clever stuff happens in the software. The HP 3PAR stuff is a Xyratex chassis with a custom ASIC; the IBM Storwize V7000 is the exact same chassis, minus the custom silicon.

    There has been a big push for SANs in many environments due to cost. If you spend $500,000 on disks and controllers and software licensing and a project comes along where DAS is optimal, for far too long people have been told "No, put it on the SAN" because it just plain cost a lot.

    I think we will see that change over the next five years: DAS and shared-nothing clusters will become a reality for some projects, and PCIe SSD the reality for others; fibre channel will remain in place for legacy implementations while FCoE and iSCSI over 10GbE using DCB become more commonplace for new ones; NAS storage will continue to serve in both scale-up and scale-out varieties; purpose-built compression/deduplication appliances will provide long-term archiving and backup retention, spinning off to tape depending on the business's legal requirements; and we will be replicating anything to anywhere, whether that's to physical pieces of hardware or off in the cloud.

    The real winners will be the ones who can tie all of it together and manage it all as a single storage pool and place new data appropriately across that pool regardless of how it transports the data (both physically at the cabling side of things and logically at the protocol level), and replicate all of the pool to other physical locations or to the cloud with the highest levels of availability, resiliency, data consistency, and performance.

  8. RonWheeler

    Jumbo frames

    Several mentions of jumbo frames on here. Lots of people seem to think it is some kind of silver bullet for SAN issues. Interesting article on real world performance over at this link:

    http://www.boche.net/blog/index.php/2011/01/24/jumbo-frames-comparison-testing-with-ip-storage-and-vmotion/

    1. M. B.

      Re: Jumbo frames

      The Dell EqualLogic Best Practices guide recommends jumbo frames for best performance, which is relevant to the above example of a poorly performing EQL.

      It's not always the best choice, I'll agree, and I used to benchmark with and without and give the results to my customers so they could decide (IBM and HP storage, back in my consulting days).

      But when it's specifically stated to use jumbo frames, it's a good bet the vendor is recommending it for a reason (in Dell's case, much lower host-side CPU utilisation from what I've seen on our PS4000E and PS5000E).
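
      The CPU argument is mostly frame-rate arithmetic. A minimal sketch, assuming an illustrative 4 Gbit/s of storage traffic and ignoring protocol headers:

```python
# Roughly how many Ethernet frames per second a host has to process to
# sustain a given iSCSI throughput at standard versus jumbo MTU.
# The 4 Gbit/s figure is illustrative and header overhead is ignored.

def frames_per_second(throughput_gbit, mtu_bytes):
    return throughput_gbit * 1e9 / 8 / mtu_bytes

for mtu in (1500, 9000):
    print(f"MTU {mtu}: ~{frames_per_second(4, mtu):,.0f} frames/s")
```

      Fewer frames means fewer interrupts and less per-packet work on the host, which is where the lower CPU utilisation comes from - though, as RonWheeler's link shows, that doesn't automatically translate into more throughput.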

    2. Getriebe

      Re: Jumbo frames

      @RonWheeler - yup, jumbo frames slow our application down, and our standard practice document is to switch them off.

      As usual, it needs careful analysis and deep knowledge to get it all optimal.

  9. Matt Bryant Silver badge
    WTF?

    Son of SAN, version 2.0....,

    So what is this "SAN" you talk of? You seem to mean big, monolithic arrays, specifically fibre-channel-attached ones, and ones with a single pair of controllers? If so, then the answer is "yes, they are going out", but that is because they are being replaced by multi-controller arrays which mix protocols such as iSCSI, FC and FCoE from the same controllers. These new scale-out SAN arrays will pool storage and allow the business to route and store data as the application requires. Software-based storage - so-called virtualized storage appliances - offers even greater flexibility for these new SANs, allowing LUNs presented from central SAN arrays to be cheaply presented and replicated to virtual storage servers out in branch offices.

    DAS in enterprise? Puh-lease, have you forgotten why we moved away from DAS in the first place? Poor utilization, too power hungry, poor resilience, poor central management options, awful centralised backup over the network, and poor inter-server performance.

    1. This post has been deleted by its author

    2. G Olson

      Re: Son of SAN, version 2.0....,

      " Puh-lease, have you forgotten why we moved away from DAS in the first place? Poor utilization, too power hungry, poor resilience, poor central management options, awful centralised backup over the network, and poor inter-server performance."

      RING THE BELL, RING THE BELL, RING THE BELL!!!!

      Specify and purchase the right SAN with the right connectivity and the right software, and all this DAS madness goes away. During the cost justification, don't forget the amount of hours@$$$ in tech support to keep the DAS madness operating. Then compare that to the cost of a properly implemented mid-range SAN for your SME shop over the effective lifetime of the SAN, plus the reduced manpower. SAN is not going away any time soon.
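
      The comparison being described is simple lifetime arithmetic. A minimal sketch - every figure below is a placeholder to be replaced with your own support hours, rates and kit prices:

```python
# Illustrative lifetime-cost comparison; all numbers are placeholders.

YEARS = 5
HOURLY_RATE = 75     # assumed cost of a tech-support hour

def lifetime_cost(capex, support_hours_per_year):
    return capex + YEARS * support_hours_per_year * HOURLY_RATE

das_sprawl   = lifetime_cost(capex=150_000, support_hours_per_year=600)
midrange_san = lifetime_cost(capex=250_000, support_hours_per_year=200)

print(f"DAS sprawl over {YEARS} years:    {das_sprawl:,}")
print(f"Mid-range SAN over {YEARS} years: {midrange_san:,}")
```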

      Certain technology projects which require Hadoop or other non-SQL data storage will use some DAS. But you would be surprised at how you can implement Hadoop on a storage network and reduce your overhead by eliminating some of that data replication.

  10. Anonymous Coward
    Anonymous Coward

    But what are they selling?

    When you copy and paste an EMC press release as an editorial feature, make sure you hammer home what they are actually selling to fix the "problem".

  11. Anonymous Coward
    Anonymous Coward

    The answer

    Is Gluster

  12. dan1980

    Horses, courses, etc...

    The solution to any problem is more buzzwords and marketing-speak from vendors.

    Obviously.

    In reality, though, the question should always come back to: "what problem are you trying to solve".

    SANs solve a LOT of problems but of course create their own in turn. DAS solves a set of problems and, again, creates some. This is nothing new - IT folk have been dealing with such compromises forever.

    Simply saying that SAN is on its way out or that 'clouds' aren't using or don't need SANs presumes that everyone is trying to solve the same problem(s), which is clearly ridiculous. Hammer, nail, &c.

    Hadoop and distributed file systems have great benefits... for certain types of workloads. The thing is, though, that one great use of Hadoop and the alternative distributed computing frameworks is in processing data generated by some other application. The point being that while the analysis of the data is managed with distributed storage across numerous compute nodes, that data was likely generated by a more 'conventional' system - one that probably involved a SAN.

    Take an SAP implementation, for example - SAP can be integrated with Hadoop for analytics, but the core SAP application and modules will almost certainly be running with the help of SAN storage. Doing it any other way is just asking for problems in trying to manage capacity, scalability and, of course, the ever-present issue of backups.

    Having a Hadoop cluster with distributed storage to analyse your 'big data' is great but useless without some other system that is feeding it that data.

    The point is that something like Hadoop is a component in many such solutions and DAS is a good match for that, but most real systems have multiple components, each of which may benefit from a different storage architecture.

  13. Fenton

    The trouble with SANs.

    We're using SANs (EMC) extensively.

    Whilst on paper they should be easy to use, in practice any configuration seems to take a long time.

    Allocating LUNs to volume groups, then allocating the same LUNs to SRDF, then configuring the LUNs on the secondary site.

    Then the configuration of snaps and clones. Everything has to happen at LUN/VG level.

    As somebody who consumes the storage as a service, this causes considerable pain and takes time, every time we need to implement a change.

    I would love to be able to grab a lump of virtual (and thin-provisioned) storage, allocate it to my filesystems, and then at a filesystem level be able to say what I want snapped/cloned/replicated.
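
    Something along these lines, perhaps - a purely hypothetical sketch of what declaring that intent at the filesystem level could look like; none of the names here correspond to any real EMC (or other vendor) API:

```python
# Hypothetical, vendor-neutral sketch of declaring storage intent per
# filesystem and letting automation sort out LUNs/SRDF/snaps behind the
# scenes. Nothing here maps to a real API.
from dataclasses import dataclass
from typing import Optional

@dataclass
class FilesystemPolicy:
    name: str
    size_gb: int
    thin_provisioned: bool = True
    snapshot_schedule: Optional[str] = None   # e.g. "hourly", "daily"
    clone_on_request: bool = False
    replicate_to: Optional[str] = None        # e.g. "secondary-site"

requests = [
    FilesystemPolicy("db_data", 2048, snapshot_schedule="hourly",
                     replicate_to="secondary-site"),
    FilesystemPolicy("db_logs", 256, snapshot_schedule="hourly",
                     replicate_to="secondary-site"),
    FilesystemPolicy("scratch", 512),         # no snaps, no replication
]

for r in requests:
    print(f"provision {r.size_gb} GB for {r.name}: thin={r.thin_provisioned}, "
          f"snaps={r.snapshot_schedule}, replica={r.replicate_to}")
```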

    1. Anonymous Coward
      Anonymous Coward

      Re: The trouble with SANs.

      You can do what you would like to with DMX/VMAX; you can even use FAST (Fully Automated Storage Tiering) so that any tracks which are particularly hot get moved onto faster disk.

      You can also configure metas/hypers/groups/directors/etc., and there are different reasons for each method, but it shouldn't take such a long time to do either. More likely it's a change-control or workload problem which is preventing your storage support team from being as quick as you want.
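
      The idea behind that sort of automated tiering is easy to sketch - a toy model only, with made-up extents and hit counts; the real thing works on sub-LUN extents with far more sophisticated statistics and move windows:

```python
# Toy model of automated tiering: count accesses per extent and promote
# the hottest extents to the fast tier. Purely illustrative.
from collections import Counter

FAST_TIER_SLOTS = 2

access_log = ["ext3", "ext1", "ext3", "ext7", "ext3", "ext1",
              "ext9", "ext1", "ext3", "ext2"]

heat = Counter(access_log)
hot = {extent for extent, _ in heat.most_common(FAST_TIER_SLOTS)}

for extent in sorted(heat):
    tier = "flash tier" if extent in hot else "spinning tier"
    print(f"{extent}: {heat[extent]} hits -> {tier}")
```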

      1. dan1980

        Re: The trouble with SANs.

        @AC 14:56

        Did I miss something or just not read between the lines? Where did the poster (fenton) say they had tiered storage or a 'storage support team'?

        Of course they MAY, but those are things that are only feasible for large, high-end deployments so it's not necessarily as simple as that. Or maybe it is.

  14. David Grindrod

    I think the belief that server-side storage will replace central SAN storage is really not valid. There will be room for server-side SSD, central FC storage and central network-based storage.

    Server-side storage (SSD) is the way to go when you need fast, low-latency storage, but at least at present it is not applicable when synchronous mirroring to a disaster recovery site is required, or is a legal requirement. Whenever such replication is required with present technologies, the speed of the local SSD will be masked by the time it takes to write to the remote location.
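
    The masking effect is easy to put rough numbers on - the latencies and distance below are illustrative, taking the speed of light in fibre as roughly 200,000 km/s:

```python
# Rough effect of synchronous mirroring on write latency. The local SSD
# and remote-array service times, and the site distance, are illustrative.

LOCAL_SSD_WRITE_MS = 0.1    # assumed local flash write acknowledgement
REMOTE_SERVICE_MS = 0.5     # assumed time for the remote array to ack
KM_TO_DR_SITE = 100
FIBRE_KM_PER_MS = 200       # ~200,000 km/s in fibre => 200 km per millisecond

round_trip_ms = 2 * KM_TO_DR_SITE / FIBRE_KM_PER_MS
sync_write_ms = LOCAL_SSD_WRITE_MS + round_trip_ms + REMOTE_SERVICE_MS

print(f"local SSD write alone:      {LOCAL_SSD_WRITE_MS:.1f} ms")
print(f"with synchronous mirroring: {sync_write_ms:.1f} ms")
```

    At 100 km, the wire and the remote array dominate the write time; the local SSD's tenth of a millisecond is lost in the noise, which is exactly the masking described above.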

    Centralised storage, be it SAN- or NAS-based, has all the tools included for replication and backup, and these are often-forgotten considerations in the equation given the present "need for speed" of storage requests.

    For sure, server-side storage is eating into central SAN storage's place in the market, but the way the storage market is going is to make devices that can do both SAN and NAS seamlessly from a central location. Also, the main storage suppliers are desperately trying to fold server-side storage into their central storage units rather than the other way around.

    Server-side storage, in my opinion, is not likely to replace central storage silos in the immediate future, although it will be merged with the SAN and NAS presentation of storage.

This topic is closed for new posts.