Cisco slings speedier SAN switches

Cisco's taken the whip to the Fibre Channel horse, shipping a bunch of kit ready for the next iteration of the venerable storage area network (SAN) standard. In the kind of cutesy marketing-speak that makes people want to set fire to blog posts, The Borg reckons its 32G-ready, 768-port (16G) MDS 9718 Director is called "the beast …

  1. CheesyTheClown

    I'm wearing a Cisco shirt... but

    Fibre Channel is a SCSI-based protocol that was, at one time, fantastic. It was the only option we had: virtualization solutions were still in their infancy, and bare-metal servers were highly dependent on booting from block-based storage. FC was the absolute ultimate transitional solution for centralizing storage. Since then, far more advanced protocols have been implemented in VMware, Hyper-V and, of course, Linux-based solutions like KVM and Xen.

    When you add it all together, FC is now antiquated, slow and has an extremely high cost overhead. This press release is proof that companies are spending millions more than needed on hardware for absolutely no apparent reason.

    40GbE for FCoE is also the dumbest idea to hit planet Earth in decades. Even with SCSI multipathing (a genuinely dangerous hack bolted onto the SCSI spec), because 40GbE is made up of 4x10GbE lanes in a port-channel, the real-world efficiency of 40GbE is so low that it amounts to an absolutely massive waste of money.

    Instead of further pissing away money on this old and useless tech, companies would be far better off building a new FlexPod based on SD-card booting, auto-deployed host-profile-based stateless configuration and, of course, NFS or SMB3 storage networking. Both of those multipath natively (SMB3 Multichannel, NFS session trunking) and scale to terabits per second instead of tens of gigabits. In addition, they don't require extra infrastructure, and overpriced (and insanely inefficient) SAN storage arrays from the likes of EMC or NetApp become optional.
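    To make the "native multipathing" point concrete, here is a minimal sketch of spreading NFS traffic over several parallel connections on a Linux client using the `nconnect` mount option (the server name and export path are hypothetical; `nconnect` requires NFS support in a reasonably modern kernel):

    ```shell
    # Hypothetical filer and export path, for illustration only.
    # nconnect=8 opens eight parallel TCP connections to the server,
    # letting the client spread I/O across them without any SAN fabric.
    mount -t nfs -o vers=4.1,nconnect=8 filer.example.com:/vmstore /mnt/vmstore

    # Verify the negotiated options on the mounted filesystem.
    grep vmstore /proc/mounts
    ```

    No zoning, no HBAs, no fabric switches: the multipathing lives entirely in the client's mount options, which is the cost argument being made above.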

    In all my research (considerable, as it's about 30 hours a week of my job), FC and FCoE yield approximately 1/80th the performance and gigabytes per dollar of NFS and SMBv3. It comes closer to 1/200th once you factor in the additional operational overhead.

    That said, the new Nexus switches sound amazing (even setting aside the 16Gb FC), in the sense that they offer reliable Ethernet (the cornerstone of FCoE), which can be used to deliver InfiniBand-grade RDMA. That improves SMBv3 performance considerably (widening the gap further) and also provides impressive (3-4x) gains for virtual-machine migration on modern hypervisors like Hyper-V and KVM.

    All that being said, for more than 40 virtualization hosts, Cisco's ACI combined with Hyper-V or OpenStack can cut data center costs by another 50% over either of these solutions. With VMware it's closer to 10%, due to VMware's nearly 300% higher cost of implementation and operation compared with the other two.

    1. seven of five

      Re: I'm wearing a Cisco shirt... but

      An interesting post with many uncommon conclusions. And all those great percentages. Nice.

      Now, how polite was that! :)

    2. Anonymous Coward
      Anonymous Coward

      Re: I'm wearing a Cisco shirt... but

      I see marketing is working hard today! ;)
