SAN vs NAS: Spelling out the differences

The names almost give away the difference between network attached storage (NAS) and storage area networks (SANs): you would expect a NAS to consist just of storage and a SAN to be a network, and that is true – up to a point. Designed to be easy to manage, a NAS is fundamentally a bunch of disks, usually arranged in a RAID and …

COMMENTS

  1. Fuzz

    iSCSI?

    that's all

  2. Paul Hargreaves
    Stop

    NAS vs SAN or "cheap array" vs "expensive array"

    > the key difference to focus on is whether or not you need the top performance and reliability of a SAN and are prepared to pay the premium. If not, you need a NAS.

    NAS vs. SAN misses the point. The quality of the array and the requirements of the host OS and software are much more important than the colour of the network cable, unless you are looking at the ultra-low end, where a $100 NAS box will be much worse than a $10,000 SAN box.

    At the mid/high end, a high quality array will do NAS and SAN equally well, and will also allow apps to run over both classes of network equally well (and in most cases, at the same time).

    BTW - someone needs to tell Oracle or VMware that NAS is not performant or reliable.

    1. Anonymous Coward

      You had me agreeing....

      ....until the last paragraph.

      VMware on NFS, if configured properly, is at least as good as iSCSI, and flexibility-wise it blows it out of the water. NFS scales to many more VMs in a datastore, and is easier to resize, and that's before you start looking at snapshot-capable boxes like NetApp.

  3. Anonymous Coward
    Linux

    iSCSI

    " Fibre Channel over Ethernet looks set to become the de facto standard for storage over the next decade."

    Not really. FCoE has a lot of drawbacks. Its only "real" advantage is that people are used to the Fibre Channel protocol.

    Using iSCSI over IP is a much better solution. One of the reasons is that, unlike FCoE, it can be switched. It also has a lot of other advantages over FCoE. That allows for MUCH easier scaling of large SANs.

    1. Anonymous Coward
      FAIL

      You mean routed?

      Of course FCoE can be switched; it just can't be routed!

    2. Matt Piechota
      Alert

      Reaction time

      "Using iSCSI over IP is a much better solution. One of the reasons being that, unlike FCoE, it can be switched. It also as a lot of other advantages over FCoE. That allows for MUCH easier scalling of large SANs."

      Unless you're doing something that needs relatively deterministic latency, since TCP (not to mention routing) can't guarantee delivery within a specified amount of time.

  4. Radek
    Thumb Down

    Few good points plus one rubbish conclusion

    "the key difference to focus on is whether or not you need the top performance and reliability of a SAN and are prepared to pay the premium. If not, you need a NAS"

    Really??? The author probably missed the fact that some NAS devices can be much beefier (and much more expensive) than a mediocre SAN. A few examples can be found here:

    http://www.gartner.com/technology/media-products/newsletters/netapp/issue24/gartner4.html

  5. Anonymous Coward
    Badgers

    Do tell...

    Was this story inspired by a user comment?

    http://forums.theregister.co.uk/post/972787

  6. Tommygun
    Thumb Down

    SAN 'expense' vs NAS 'cheapness'

    I think this article misses pointing out that not *all* NAS devices are capable of providing block-based storage too...

  7. Anonymous Coward
    Thumb Up

    san

    I've got a big virtualised desktop pain, which was cured with my hefty SAN. It's the only answer.

  8. GreenOgre
    Boffin

    Recipe for a SAN...

    A SAN is just NAS on a dedicated network with a larger price tag. Fibre Channel was largely a means to keep the vendor margin up.

    DIY SAN:

    Take 1 x86 server with a bunch of disks (recycled server is fine, it has the horsepower)

    Add OpenFiler.com or FreeNAS.org and enable iSCSI

    Add one or more application servers with dedicated NICs for the storage network (any OS, even MS Windows does iSCSI now; see the initiator sketch below).

    Mix all together through a Gigabit switch.

    Prep time: 1 hour

    Bake time: 0

    For that little extra something, use two NICs per host and bond the channels to get 2Gb/s and reliable connections.
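
    To flesh out the application-server step, here is a minimal sketch of the Linux initiator side using the open-iscsi tools (the portal address and target IQN are placeholders, not values from any particular box):

        import subprocess

        PORTAL = "192.168.10.5"  # placeholder IP of the OpenFiler/FreeNAS box
        TARGET = "iqn.2011-01.org.example:storage.lun1"  # placeholder target IQN

        # Ask the storage box which iSCSI targets it is exporting
        subprocess.run(["iscsiadm", "-m", "discovery", "-t", "sendtargets",
                        "-p", PORTAL], check=True)

        # Log in to the discovered target; the LUN then appears as an ordinary
        # local block device (e.g. /dev/sdX), ready for a filesystem
        subprocess.run(["iscsiadm", "-m", "node", "-T", TARGET,
                        "-p", PORTAL, "--login"], check=True)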

    1. Kebabbert

      Solaris 11 Express

      also offers iSCSI and NAS functionality via COMSTAR, which was used in Oracle's recent TPC-C world record of 27 million tpmC.

    2. pPPPP

      Recipe for a SAN

      Umm, that's not a SAN. That's NAS. It's attached to the network. If you use iSCSI then, yes, it's block-level storage, but it's not a SAN. It also won't perform very well, although that's subjective of course. I wouldn't run an enterprise on it. It's not going to supply DR capability with a decent RPO and RTO.

      That said, I've got an old K6 running FreeNAS at home, and it does the job for me.

    3. Tommygun

      Bake your own SAN?

      And how do you guarantee throughput between the storage and an application? And what happens as you expand your storage capacity? And where's the redundancy? Businesses don't just use SANs for performance; it's the reliability that's key. That's how EMC Symmetrix has held the top spot in datacentre SANs.

  9. Gordan
    Flame

    Buh?

    The article waffles around the subject and fails to point out the one fundamental distinguishing feature between SAN and NAS.

    NAS exports a network file system, typically via NFS or CIFS.

    SAN exports block devices upon which a standard (local or cluster) file system can be created.

    This is the beginning and the end of the distinction between them.
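
    To make that concrete, here is a toy sketch of how the two present themselves to client software (the mount point and device paths are purely illustrative):

        # NAS: the client consumes files over a network mount (NFS/CIFS);
        # the filer owns the filesystem.
        with open("/mnt/nas_share/report.txt") as f:  # illustrative NFS/CIFS mount
            data = f.read()

        # SAN: the client is handed a raw block device and owns whatever
        # filesystem it puts on top; here we just read the first sector.
        with open("/dev/sdb", "rb") as dev:  # illustrative iSCSI/FC LUN
            first_sector = dev.read(512)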

  10. Paul Crawford Silver badge

    Missing the point (again)

    Most users want files, therefore they really want a NAS.

    Few users need block storage, and typically only for high-end things like databases or sucky email servers (actually, if you know one that does not suck, please let me know); most users would see the SAN through a server which mounts the file system(s) of choice on top of it.

    Cost & reliability are often correlated, but not sucking at something is sadly rare :(

    As others have pointed out, you can have block access using iSCSI from a NAS-like unit, so you can have all of them in one device.

    Backing up? Now there is an interesting situation, as a block SAN has no internal idea of *what* each block holds, so you can snapshot and save, but not on a per-file/per-user basis, and you can't exclude crud like users' browser caches, etc.

    With a NAS you can do both (snapshot and selectively backup/restore).

    However, you need it to run a file system and protocol that works for your users, and there are some applications (both Windows and Linux) that seem broken on network mounts because the mounts don't completely behave like the low-level local file system the applications expect. Crap design for sure, but if you must use them and need remote high-reliability storage, you may need SAN/iSCSI, with your users putting the file system on the served-up block device, to solve that problem.

    Ultimately, you normally want something to keep all key data in one reliable place, and to allow proper protection by replication/backing up so your users don't have to. As they won't, in most cases, know or care until it is too late...

    1. Anonymous Coward

      Backup offline?

      @Paul - if you want to backup a large SAN hosted volume, take a snapshot and mount it up on a separate dedicated mount server where your tape drives are. This way you get all the goodness of SAN, very fast tape backups, selective restores of individual files etc, inc/diff backups if required and all of that offline - you only need to quiesce your production server in order to take the snap. Perhaps the best thing about mount server backups is that you get to use one set of tape drive licenses so your backup software costs far less for equivalent service levels.
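
      A rough sketch of that cycle, substituting host-side LVM snapshots for the array-side snapshots described above just to show the shape of it (all volume, path and size values are placeholders):

          import subprocess

          def run(*cmd):
              subprocess.run(cmd, check=True)

          # Take a point-in-time snapshot of the (quiesced) production volume
          run("lvcreate", "--snapshot", "--size", "10G",
              "--name", "db_snap", "/dev/vg_prod/db_lv")

          # Mount the snapshot read-only on the mount server and back it up
          run("mount", "-o", "ro", "/dev/vg_prod/db_snap", "/mnt/backup")
          run("tar", "-czf", "/backups/db_snap.tar.gz", "-C", "/mnt/backup", ".")

          # Clean up once the backup job has the data
          run("umount", "/mnt/backup")
          run("lvremove", "-f", "/dev/vg_prod/db_snap")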

      1. Paul Crawford Silver badge
        Thumb Up

        @Backup offline?

        Very good suggestion, though you need to know just what file systems users have in place, and a server that supports all of them. I guess for typical use (a mix of Windows and Linux) you only really expect to have NTFS and ext3/4 to mount for such nifty tricks, so I guess most Linux boxes will do it.

        I am not sure quite how selective restoring of file(s) would work. Probably you need to take the current volume offline (so everything is consistent), then mount it on the tape server, restore file by file, and re-mount it on the user's machine?

        1. Anonymous Coward

          Stuff...

          The way I do it (I've been designing this sort of system for my employer) is to have one mount server per OS, with shared drives between them. It's probably tempting fate to get Linux to back up Windows and vice versa. With the Unixes we use VxFS, so theoretically we'd be able to just use one box for AIX/Solaris/HP-UX.

          For restores, we do small individual file restores on the mount server and use a network copy to move them over to the production server. You can also do a directed restore in most backup packages, which can restore from the tape drive on the mount server and redirect the file to the production host. For larger restores you need to think a bit: if it's the whole filesystem, just restore from tape to the snapshot and then restore the snapshot over the production disks. For other restores, where it's a large amount of the filesystem and too much to network copy, sync up the snapshot, restore the files you want back, then restore the snapshot over the prod disk. The latter is on the complex side, and wouldn't typically be needed, admittedly.

          1. Paul Crawford Silver badge
            Thumb Up

            @Stuff

            Thanks for the advice, AC. Yes, you are right, as there are things like NTFS alternate data streams that have no obvious Linux equivalent and so really need a Windows-based server to access them reliably.

            [One could argue that is a dumb feature, mostly of use for hiding Trojans from what I have seen, but the fact that it *might* be used in a key application needs to be considered for a reliable backup/restore]

            Clearly there are a lot of things to be considered when you have block-based SAN/iSCSI plus the need for centralised and efficient backing up. Will el Reg step up to this challenge?

  11. friet
    Stop

    He's obviously not a storage guy....missing the point completely...

    The continuous debate: NAS- vs SAN-based arrays. First of all, notice the difference: a SAN is indeed only a network, and I couldn't care less whether it is Fibre Channel, FCoE (or better, CEE), iSCSI, or whatever protocol comes along in the future to talk across that network.

    The discussion should be: file server (= NAS) vs external disk array (= what you connect to a SAN). And again, the thing is: a file server is primarily used to connect PCs or Macs, to provide file shares and nothing else.

    An external array is designed to connect to servers, not PCs (and a role of one of those servers could be... file serving). It is mostly used for applications like Exchange, SQL and Oracle, to name a few. Those are not built to use file shares to host their databases; they expect a 'dedicated' disk (in the case of an external array, a volume presented to them).

    And as someone else also mentioned, yes, there are NAS devices out there that can also act as an external disk array (e.g. using OpenFiler or FreeNAS). What they fail to mention is that using a standard file server will introduce single points of failure... and I don't want to bet my business on a device without redundant controller capabilities...

    And yes, there are also NAS solutions out there that do tackle the SPOF issue... but then you can already see where we're going: it's actually two NAS boxes connected to... an external disk array. Voila, full circle.

    There should be no NAS vs SAN debate... both have their uses.

    Regards,

    Frank

  12. Jeff 11
    Thumb Down

    Tired of these NAS vs SAN articles

    Pretty much the ONLY question that should be on anyone's mind when asked to choose between these two techs is "What's my I/O profile?". And if you can't answer this question, you need to ask someone who can.

    A block device is much, much more efficient than a NAS when doing block-level operations. These are usually high-transaction-volume database, streaming and VM applications, all of which require the high frequencies of small reads and writes that you'd get writing to a local HDD. If these frequencies are especially high, or you have more than a small number of clients that need this sort of I/O profile, you need a SAN, because a NAS usually won't keep up without a hefty amount of cache RAM to buffer the transaction volume.

    As for iSCSI, it's a limited technology because of TCP/IP latency. There are I/O profiles where a 10Gb iSCSI connection will be much, much slower than 2Gb Fibre Channel, or channel-bonded FCoE. But there are other profiles that suit it quite well, because it's often much faster than SMB/NFS with the right hardware, you're not beholden to any particular filesystem, and you can have real-time, high-availability replication if your network infrastructure is designed properly.

    As for the author's seemingly key 'fact' about SANs being inherently more reliable: this isn't true. They're inherently more corruptible from client failure than NAS because the client can disconnect mid-transaction, without indicating to the SAN that things have been left in an unfinished state. You can push the responsibility of this to the filesystem, but then that usually precludes using a heterogeneous mix of clients.

  13. Anonymous Coward
    Boffin

    Has this author actually ever worked with SANs and NASes?

    "...top performance and reliability of a SAN..." Are you kidding? SANs are incredibly fragile when it comes to writes, and incredible slow. Every SAN I've managed has had issues where it would break it's RAID configuration every 6 months due to the fragility of SAN network communication, requiring extensive rebuilds. And they were slow as hell on writes too, compared to the NAS's. One by one, I've replaced every SAN with a NAS, and had better performance and reliability. Everyone I know has had similar experiences.

    1. DJUNIX
      Stop

      You're kidding... right?

      I'd like to see you put an EMR like Epic or Cerner on a NAS and come back and tell me it's more reliable and more stable than a SAN (which for an EMR should be on a Tier 1 array).

      1. NoSh*tSherlock!
        WTF?

        OK DJUNIX, how about CERN telling you?

        Go look at https://openlab-mu-internal.web.cern.ch/openlab-mu-internal/03_Documents/4_Presentations/Slides/2010-list/2010OOW-08.pdf

        But sadly this whole NAS/SAN thing has been back-to-front thinking for years; I feel like I am suddenly back in 2000, when we had these debates.

        I'm spending my time with cloud, using REST now; catch up to the real world with stuff like NFSv4, pNFS, iWARP and RDMA if you want to be up to date.

        Even FCoE is an acknowledgement that, at the switch and cable layer, FC has been swallowed.

    2. JL 1

      you miss the point as well

      A badly configured SAN environment will be every bit as crappy as a badly configured NAS environment. A badly configured NAS will be worse than a decent SAN. In fact, there is more to go wrong in a NAS environment (all that funky WAFL file system, support for all those protocols, and normally data transfer over a non-dedicated IP network). However, set up an enterprise-grade NAS environment properly and you'll find a few things:

      1. it's cheaper and more functional than an enterprise grade SAN environment

      2. it'll have equivalent reliability to a decent SAN environment

      3. it LOVES big databases, particularly Oracle

      4. it LOVES VMware and VMware LOVES it

      5. with a dedicated 10GbE environment from server to switch it performs like a champ

      Of course, if you set up the NAS environment poorly, then a decent SAN will thrash it.

      Don't make an industry-wide comment based on your narrow experience.

      JL

    3. Steven Jones

      Try working on a proper SAN

      "Every SAN I've managed has had issues where it would break it's RAID configuration every 6 months due to the fragility of SAN network communication, requiring extensive rebuilds."

      I'm going to make a guess that you've never worked on a full-scale enterprise SAN environment, as the above is twaddle. Whoever was buying, configuring or supporting the kit didn't know what he or she was doing. Anybody who works on large enterprise systems with SANs will know that you need top-quality kit, support and fundamentally good design. Do it on the cheap and you'll have a disaster on your hands. However, if you need a rock-solid, highly performant, high-throughput system to support huge databases (tens of TB) over multiple nodes, then an enterprise FC SAN is what will deliver it. Scale that down, don't include redundant paths, path-balancing software or people who know how to manage the things, and you will be in a mess. When you have hundreds of servers all relying on storage from a common set of SAN arrays, they have to be absolutely rock-solid. Quite simply, a failure in one of our SAN arrays would stop hundreds of servers dead, either because they rely directly on the array, or on servers that do.

      I should also add that NAS is almost equally critical in many data centres. There are many very large apps which rely on NAS just as much as SAN. For instance, put a large Siebel system together with many app servers and the DB will usually be on SAN, with the other, unstructured data held on a NAS server. In a large enterprise the NAS has to have similar availability to SAN, although it can't match it on throughput or predictable response times.

      Also, as somebody has pointed out, the important difference between NAS and SAN is the presentation. NAS presents networked file services; SAN presents devices. Simply put, with a SAN the file systems are local to the servers, while with a NAS the file system is held centrally. Some NAS servers can also offer SAN by presenting devices to servers which are mapped to files on the NAS box, which is convenient for consolidation, but doesn't provide for particularly high throughput.

  14. Anonymous Coward
    Stop

    FCoE will never come to fruition...

    FCoE will not be the future. What we will see is traditional FC infrastructures disappearing naturally over hardware lifecycles. However, instead of FCoE replacements we will see the dominance of iSCSI and NFS. Performance-wise, with DCE/CEE and 10Gbps+, the latency and performance requirements that were once only achievable by building an oversubscribed, physically separate SAN can easily be realised on a TCP over IP over Ethernet network, with performance managed through QoS and DCE features such as Priority Flow Control. For those extra high-performance environments, jumbo frames can be carefully enabled.
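
    For example, the jumbo-frame part is just an MTU change on the storage-facing interfaces, along these lines (the interface name and address are placeholders, and every hop in the path, including switch ports and the array, must agree on the MTU):

        import subprocess

        # Raise the MTU on the storage-facing NIC to 9000 bytes (jumbo frames);
        # "eth1" is a placeholder interface name
        subprocess.run(["ip", "link", "set", "dev", "eth1", "mtu", "9000"], check=True)

        # Verify end to end: 8972 bytes of payload + 28 bytes of headers = 9000,
        # with fragmentation disabled; this fails if any hop is still at 1500
        subprocess.run(["ping", "-M", "do", "-s", "8972", "-c", "3", "192.168.10.5"],
                       check=True)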

    It is the VoIP performance debate of roughly 10 years ago, when we wondered whether an IP and Ethernet network could offer the performance guarantees needed for successful end-to-end IP telephony. An RTT of a couple of ms in the data centre is surely good enough for most storage solutions?

